ESS with KVM virtualized protocol clusters
Introduction
This document describes how we configured a two-site AFM-DR replicated environment with ESS 5K storage clusters at both locations, and virtualized AFM gateways and protocol clusters serving fully isolated AD environments. The hardware used in the solution was the following, duplicated at both locations:
• ESS 5K
• EMS
• 2x IBM 5105-22E with 640 GB memory, 1.x TB internal storage, 2x Mellanox ConnectX-4 100 GbE network adapters
• 2x 100 GbE Mellanox switches (SN2700)
High-level overview of physical servers and network
The solution was spread across two locations, with a direct fiber connection between the switches at the two locations:
FIXME: include network diagram
The BMC and provisioning networks were local to each location; only the high speed network was connected between the locations. This meant we needed to duplicate the deployment services on ems-1-1 and ems-2-1 to cover both locations.
Each of the esskvm servers was used for hosting KVM virtual machines for the AFM gateways and protocol nodes. To allow for efficient 100 GbE networking in the VMs, we used SR-IOV capable network adapters to give the VMs direct access to the network adapters. The installation of hypervisors and VMs was automated through Red Hat kickstart of the base OS image and Ansible scripts for further configuration, minimizing the amount of local customization on each node.
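Before relying on SR-IOV, it is worth checking that the adapters actually expose virtual functions. A quick check on one of the hypervisors (the interface name below is from this setup):
# cat /sys/class/net/enP51p1s0f0/device/sriov_totalvfs
A value of 0 usually means SR-IOV is disabled in the adapter firmware.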
IP Plans
FIXME
Kickstart setup
We decided to use FTP for file transfers, since it is less likely to conflict with anything else running on the EMS. So we installed vsftpd and enabled anonymous access:
# dnf install vsftpd
# sed -i "s/anonymous_enable=NO/anonymous_enable=YES/" /etc/vsftpd/vsftpd.conf
# systemctl enable vsftpd
# systemctl start vsftpd
We use RHEL 8.4 as the base install, so we make a copy of the RHEL 8.4 ppc64le DVD ISO contents under /var/ftp/rhel84 to use for the network install:
# mount rhel-8.4-ppc64le-dvd.iso /mnt
# rsync -av /mnt/ /var/ftp/rhel84
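A quick way to verify that anonymous FTP and the install tree both work is to list the directory from another node on the provisioning network:
# curl --list-only ftp://192.168.45.21/rhel84/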
For the hypervisor install, we first do a manual install, save a copy of /root/anaconda-ks.cfg, and customize it for automated installs. We use the following /var/ftp/kickstart.cfg for the hypervisors:
#version=RHEL8
# Use text mode install
text
#
# Add AppStream Repo
repo --name="AppStream" --baseurl=ftp://192.168.45.21/rhel84/AppStream
#
#
%packages
@^server-product-environment
kexec-tools
#
%end
#
# System language
lang en_US.UTF-8
#
# Use network install
url --url="ftp://192.168.45.21/rhel84"
#
firstboot --enable
skipx
#
ignoredisk --only-use=sda
bootloader --append=" crashkernel=auto" --location=mbr --boot-drive=sda
clearpart --all --initlabel --drives=sda
part prepboot --fstype="prepboot" --ondisk=sda --size=8
part /boot --fstype=xfs --ondisk=sda --size=2000
part pv.03 --fstype="lvmpv" --ondisk=sda --size=16383 --grow
volgroup rootvg --pesize=4096 pv.03
logvol / --fstype=xfs --vgname=rootvg --size=100000 --name=rootlv
logvol swap --vgname=rootvg --size=16383 --name=swaplv
#
timezone Europe/Oslo --isUtc --ntpservers=192.168.45.21
rootpw --iscrypted $6$PFlyYtf18XMmlgLz$ME9.COMOd7il6q4Nl7tPmOLGsEgTeHKUytDryYJtCc9StT3tw42DvfIvOQV9ANV3m0SU8xg2u4dGpzR7qMQ1d.
reboot
#
keyboard --vckeymap=us --xlayouts=""
services --enable="chronyd"
#
%addon com_redhat_kdump --enable --reserve-mb="auto"
#
%end
#
%anaconda
pwpolicy root --minlen=6 --minquality=1 --notstrict --nochanges --notempty
pwpolicy user --minlen=6 --minquality=1 --notstrict --nochanges --emptyok
pwpolicy luks --minlen=6 --minquality=1 --notstrict --nochanges --notempty
%end
#
%post
#
hostnamectl set-hostname $(hostnamectl --transient)
#
cat <<EOF >> /etc/fstab
tmpfs /tmp tmpfs mode=1777,nodev,nosuid,size=8589934592 0 0
EOF
#
%end
#
and /var/ftp/kickstart-vm.cfg for the virtual machines. The differences between the two are the use of vda instead of sda, and a much smaller disk allocation for the / file system on the VMs:
#version=RHEL8
# Use text mode install
text
#
repo --name="AppStream" --baseurl=ftp://192.168.45.21/rhel84/AppStream
#
%packages
@^server-product-environment
kexec-tools
#
%end
#
# System language
lang en_US.UTF-8
#
# Use network install
url --url="ftp://192.168.45.21/rhel84"
#
firstboot --enable
skipx
#
ignoredisk --only-use=vda
bootloader --append=" crashkernel=auto" --location=mbr --boot-drive=vda
clearpart --all --initlabel --drives=vda
part prepboot --fstype="prepboot" --ondisk=vda --size=8
part /boot --fstype=xfs --ondisk=vda --size=2000
part pv.03 --fstype="lvmpv" --ondisk=vda --size=16383 --grow
volgroup rootvg --pesize=4096 pv.03
logvol / --fstype=xfs --vgname=rootvg --size=50000 --name=rootlv
logvol swap --vgname=rootvg --size=16383 --name=swaplv
#
timezone Europe/Oslo --isUtc --ntpservers=192.168.45.21
rootpw --iscrypted $6$PFlyYtf18XMmlgLz$ME9.COMOd7il6q4Nl7tPmOLGsEgTeHKUytDryYJtCc9StT3tw42DvfIvOQV9ANV3m0SU8xg2u4dGpzR7qMQ1d.
reboot
#
keyboard --vckeymap=us --xlayouts=""
services --enable="chronyd"
#
%addon com_redhat_kdump --enable --reserve-mb="auto"
#
%end
#
%anaconda
pwpolicy root --minlen=6 --minquality=1 --notstrict --nochanges --notempty
pwpolicy user --minlen=6 --minquality=1 --notstrict --nochanges --emptyok
pwpolicy luks --minlen=6 --minquality=1 --notstrict --nochanges --notempty
%end
#
%post
#
hostnamectl set-hostname $(hostnamectl --transient).acme.com
#
cat <<EOF >> /etc/fstab
tmpfs /tmp tmpfs mode=1777,nodev,nosuid,size=8589934592 0 0
EOF
#
%end
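Since kickstart syntax errors only surface some way into an install, it can be worth validating both files up front with ksvalidator from the pykickstart package (assuming it is installed on the EMS):
# dnf install pykickstart
# ksvalidator /var/ftp/kickstart.cfg
# ksvalidator /var/ftp/kickstart-vm.cfg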
PXE boot
We did not manage to boot these machines from PXE, so the PXE server setup is not documented here. The problem was most likely that Petitboot uses a different scheme for locating the pxeboot image than what the SMS menu uses. FIXME: dig more into this later.
Petitboot
To install the hypervisor, the easiest approach is to power cycle the server and connect to the serial-over-LAN console using IPMI:
# ipmitool -I lanplus -H BMC-IP-ADDRESS -P PASSWORD power off
# ipmitool -I lanplus -H BMC-IP-ADDRESS -P PASSWORD power status
# ipmitool -I lanplus -H BMC-IP-ADDRESS -P PASSWORD power on
# ipmitool -I lanplus -H BMC-IP-ADDRESS -P PASSWORD power status
# ipmitool -I lanplus -H BMC-IP-ADDRESS -P PASSWORD sol activate
After a while this brings up the Petitboot menu. From that menu, select "System Configuration", then "Static IP configuration", and configure an IP address on the interface you want to boot from. Then, from the main menu, create a new menu entry by pressing "n" and fill in:
Kernel: ftp://192.168.45.21/rhel84/ppc/ppc64/vmlinuz
Initrd: ftp://192.168.45.21/rhel84/ppc/ppc64/initrd.img
Boot arguments: inst.ks=ftp://192.168.45.21/kickstart.cfg inst.repo=ftp://192.168.45.21/rhel84 ip=FIXME
Then boot this new entry, and the base OS install will run unattended.
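When you want to leave the console again, the SOL session can be closed with the ipmitool escape sequence ~. or deactivated explicitly:
# ipmitool -I lanplus -H BMC-IP-ADDRESS -P PASSWORD sol deactivate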
SR-IOV
One SR-IOV Virtual Function (VF) needs to be configured for each interface we want to assign to a VM. For the Spectrum Scale daemon network, we need a virtual adapter on each of the physical interfaces enP51p1s0f0 and enp1s0f0. In our current solution each hypervisor hosts up to 5 KVM guests, so we need 5 VFs per physical adapter. This is configured through:
# echo 5 > /sys/class/net/enP51p1s0f0/device/sriov_numvfs
# echo 5 > /sys/class/net/enp1s0f0/device/sriov_numvfs
Then we need to assign a MAC address for each of the VFs. We decide to use the following scheme for assigning MAC addresses:
esskvm-1-1:
1a:1a:1a:ab:ba:*
3a:3a:3a:ab:ba:*
esskvm-1-2:
2a:2a:2a:ab:ba:*
0a:0a:0a:ab:ba:*
esskvm-2-1:
1a:1a:1a:de:ad:*
3a:3a:3a:de:ad:*
esskvm-2-2:
2a:2a:2a:de:ad:*
0a:0a:0a:de:ad:*
We use :ab:ba: for location 1 and :de:ad: for location 2, with odd/even first digits for odd/even numbered servers. The full set of commands for configuring the VFs and MAC addresses is then:
# First we enable the number of SR-IOV virtual functions we need:
echo 5 > /sys/class/net/enP51p1s0f0/device/sriov_numvfs
echo 5 > /sys/class/net/enp1s0f0/device/sriov_numvfs
#
# Then set the static mac addresses:
#
# --- enP51p1s0f0 ---
#
echo 0033:01:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
ip link set enP51p1s0f0 vf 0 mac 1a:1a:1a:de:ad:00
echo 0033:01:00.2 > /sys/bus/pci/drivers/mlx5_core/bind
#
echo 0033:01:00.3 > /sys/bus/pci/drivers/mlx5_core/unbind
ip link set enP51p1s0f0 vf 1 mac 1a:1a:1a:de:ad:01
echo 0033:01:00.3 > /sys/bus/pci/drivers/mlx5_core/bind
#
echo 0033:01:00.4 > /sys/bus/pci/drivers/mlx5_core/unbind
ip link set enP51p1s0f0 vf 2 mac 1a:1a:1a:de:ad:02
echo 0033:01:00.4 > /sys/bus/pci/drivers/mlx5_core/bind
<snip>
#
# Define MTUs on hypervisor
ip link set enP51p1s0f0v0 mtu 9000
ip link set enP51p1s0f0v1 mtu 9000
ip link set enP51p1s0f0v2 mtu 9000
<snip>
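The result can be verified on the physical function, which lists each VF together with its MAC address:
# ip link show enP51p1s0f0 | grep "vf "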
The PCI devices to assign to the KVM guests can then be identified through virsh nodedev-list. These device names will be used when assigning the devices to the KVM guests.
# virsh nodedev-list|grep _de_ad_
net_enp1s0f0v0_0a_0a_0a_de_ad_00
net_enp1s0f0v1_0a_0a_0a_de_ad_01
net_enp1s0f0v2_0a_0a_0a_de_ad_02
net_enp1s0f0v3_0a_0a_0a_de_ad_03
net_enp1s0f0v4_0a_0a_0a_de_ad_04
net_enP51p1s0f0v0_2a_2a_2a_de_ad_00
net_enP51p1s0f0v1_2a_2a_2a_de_ad_01
net_enP51p1s0f0v2_2a_2a_2a_de_ad_02
net_enP51p1s0f0v3_2a_2a_2a_de_ad_03
net_enP51p1s0f0v4_2a_2a_2a_de_ad_04
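If in doubt about which PCI address a given VF name maps to, virsh nodedev-dumpxml shows the parent PCI device of the network node device:
# virsh nodedev-dumpxml net_enP51p1s0f0v0_2a_2a_2a_de_ad_00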
Bridge interface for provisioning network
After the hypervisor is installed, we need to reconfigure the interface on the provisioning network as a bridge device, so that the VMs can use it to access the provisioning network. This is done by logging into the hypervisor via IPMI and running the following commands (adjust the IP address for each hypervisor):
nmcli con del enP1p8s0f0
nmcli con add type bridge con-name bridge-br0 ifname br0 ip4 192.168.45.9/24
nmcli con add type bridge-slave ifname enP1p8s0f0 master br0 con-name bridge-slave-enP1p8s0f0
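A quick sanity check after the bridge is up:
# nmcli -f NAME,DEVICE,STATE con show
# ip -br addr show br0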
Ansible setup
After the base OS is installed, we use Ansible to perform additional setup. This includes bonding the high speed interfaces, installing needed packages, sending syslog to the EMS, and more.
First we configure the ansible environment by creating the folder /root/ansible and the file /root/ansible/ansible.cfg containing:
[defaults]
inventory = hosts
host_key_checking = False
then /root/ansible/hosts, listing the nodes together with the connection settings for all of them as inventory group variables (group variable sections like [all:vars] belong in the inventory file, not in ansible.cfg):
[all:vars]
ansible_connection=ssh
ansible_user=root
ansible_ssh_pass=ibmesscluster
#
[hypervisors]
esskvm-1-1 ansible_host=192.168.45.24
esskvm-1-2 ansible_host=192.168.45.25
#
[kvmguests]
essafm-2-1 ansible_host=192.168.45.136
essafm-2-2 ansible_host=192.168.45.137
and finally the playbook /root/ansible/node-setup.yaml (note that the group names in the when: conditions must be quoted as strings):
- hosts: all
  vars:
    hyperpackages:
      - tmux
      - mstflint
      - "@virt"
      - virt-install
    kvmguestpackages:
      - python36
      - numactl
      - kernel-devel
      - kernel-headers
      - cpp
      - gcc
      - gcc-c++
      - elfutils
      - elfutils-devel
      - make
  handlers:
    - name: Restart rsyslog
      service:
        name: rsyslog
        state: restarted
    - name: Reload systemd
      ansible.builtin.systemd:
        daemon_reload: yes
  tasks:
    - name: set hostname
      hostname:
        name: "{{ inventory_hostname }}.acme.com"
    - name: Add RHEL BaseOS repo
      yum_repository:
        name: RHEL-BaseOS
        description: RHEL-BaseOS
        baseurl: ftp://192.168.45.21/rhel84/BaseOS
        enabled: yes
        gpgcheck: no
    - name: Add RHEL AppStream repo
      yum_repository:
        name: RHEL-AppStream
        description: RHEL-AppStream
        baseurl: ftp://192.168.45.21/rhel84/AppStream
        enabled: yes
        gpgcheck: no
    - name: Configure bond0 master
      when: "'kvmguests' in group_names"
      nmcli:
        type: bond
        conn_name: "{{ item.conn_name }}"
        ifname: "{{ item.ifname }}"
        ip4: "{{ item.ip4 }}"
        state: present
        mode: active-backup
      with_items:
        - "{{ nmcli_bond0 }}"
    - name: Configure bond slaves
      when: "'kvmguests' in group_names"
      nmcli:
        type: bond-slave
        conn_name: "{{ item.conn_name }}"
        ifname: "{{ item.ifname }}"
        master: "{{ item.master }}"
        state: present
      with_items:
        - "{{ nmcli_bond_slave }}"
    - name: fix MTU
      when: "'kvmguests' in group_names"
      shell: "nmcli con mod {{ item.conn_name }} 802-3-ethernet.mtu 9000"
      with_items:
        - "{{ nmcli_bond_slave }}"
        - "{{ nmcli_bond0 }}"
    - name: Push /etc/hosts
      copy:
        src: /etc/hosts
        dest: /etc/hosts
    - name: Push script for starting vms
      when: "'hypervisors' in group_names"
      copy:
        src: files/start-vms-{{ inventory_hostname_short }}
        dest: /usr/local/sbin/start-vms
        mode: "0555"
    - name: Push service for starting VMs
      when: "'hypervisors' in group_names"
      copy:
        src: files/start-vms.service
        dest: /etc/systemd/system/start-vms.service
        mode: "0444"
      notify: Reload systemd
    - name: enable start-vms service
      when: "'hypervisors' in group_names"
      ansible.builtin.systemd:
        name: start-vms
        enabled: yes
    - name: Install packages on hypervisors
      when: "'hypervisors' in group_names"
      dnf:
        name: "{{ hyperpackages }}"
        state: present
    - name: Install packages on KVM guests
      when: "'kvmguests' in group_names"
      dnf:
        name: "{{ kvmguestpackages }}"
        state: present
    - name: enable libvirtd service
      when: "'hypervisors' in group_names"
      ansible.builtin.systemd:
        name: libvirtd
        state: started
        enabled: yes
    - name: stop and disable firewalld service
      ansible.builtin.systemd:
        name: firewalld
        state: stopped
        enabled: no
    - name: disable selinux
      when: "'kvmguests' in group_names"
      selinux:
        state: disabled
    - name: keep 2 GB free memory available
      when: "'kvmguests' in group_names"
      sysctl:
        name: vm.min_free_kbytes
        value: "2097152"
    - name: Configure remote syslogging
      copy:
        dest: /etc/rsyslog.d/remote.conf
        content: |
          # Ansible managed. Do not edit locally.
          *.* @192.168.45.21
      notify: Restart rsyslog
    - name: Push /etc/profile.d/scale.sh to set PATH for Scale commands
      when: "'kvmguests' in group_names"
      copy:
        src: files/scale-profile.sh
        dest: /etc/profile.d/scale.sh
        mode: "0444"
# FIXME: reload network config if MTU has changed
# FIXME: reboot to fully disable selinux (shame on me, why do we disable selinux??)
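The nmcli_bond0 and nmcli_bond_slave variables referenced by the playbook live in per-host files under /root/ansible/host_vars/. A minimal sketch for one of the KVM guests (the IP address and connection names are placeholders, not taken from the actual IP plan):
# /root/ansible/host_vars/essafm-1-1
nmcli_bond0:
  - conn_name: bond0
    ifname: bond0
    ip4: 192.168.100.138/24
nmcli_bond_slave:
  - conn_name: bond-slave-eth1
    ifname: eth1
    master: bond0
  - conn_name: bond-slave-eth2
    ifname: eth2
    master: bond0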
VM Installation
Each of the VMs is installed by first creating a logical volume for it:
# lvcreate --size 100G --name vm-essafm-1-1 rootvg
then creating and installing the VM with virt-install. Here we bridge eth0 onto the br0 device on the hypervisor, and assign PCIe access to the VFs for eth1 and eth2:
# virt-install --connect qemu:///system --name essafm-1-1 --ram 196608 \
--vcpus 32 --disk path=/dev/rootvg/vm-essafm-1-1 \
--network=bridge:br0,model=virtio --os-type=linux --os-variant=rhel8.4 \
--extra-args="console=ttyS0,115200n8 ip=192.168.45.138:::24:essafm-1-1:eth0:none net.ifnames=0 \
inst.repo=ftp://192.168.45.6/rhel84 inst.ks=ftp://192.168.45.6/kickstart-vm.cfg" \
--location=ftp://192.168.45.6/rhel84 --serial pty --graphics none --console pty,target.type=virtio \
--hostdev net_enp1s0f0v1_0a_0a_0a_de_ad_01 --hostdev net_enP51p1s0f0v1_2a_2a_2a_de_ad_01
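With --graphics none, virt-install attaches directly to the guest's text console; if the session is lost, it can be re-attached with virsh (escape with Ctrl+]):
# virsh console essafm-1-1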
Just make sure the VM name, IP address and MAC addresses are correct. Once the base install is done, we add the node to ansible/hosts and ansible/host_vars/hostname and run our playbook to complete the configuration:
# ansible-playbook node-setup.yaml
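When only a single new node has been added, the run can be limited to that host:
# ansible-playbook node-setup.yaml --limit essafm-1-1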
Hypervisor startup
Once the hypervisors boot, we need to make sure the SR-IOV VFs are configured with the correct MAC addresses and MTU before the VMs are started. This is implemented through /root/ansible/files/start-vms-$hostname, containing the commands for configuring the VFs, MAC addresses and MTU, and for starting the VMs. Example /root/ansible/files/start-vms-esskvm-2-1 file (note the :de:ad: location 2 MAC addresses and the essafm-2-1 guest):
#!/bin/bash
#
# Ansible managed
#
# First we enable the SR-IOV virtual functions:
echo 5 > /sys/class/net/enP51p1s0f0/device/sriov_numvfs
echo 5 > /sys/class/net/enp1s0f0/device/sriov_numvfs
# Then set the static mac addresses:
# --- enP51p1s0f0 ---
echo 0033:01:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
ip link set enP51p1s0f0 vf 0 mac 1a:1a:1a:de:ad:00
echo 0033:01:00.2 > /sys/bus/pci/drivers/mlx5_core/bind
echo 0033:01:00.3 > /sys/bus/pci/drivers/mlx5_core/unbind
ip link set enP51p1s0f0 vf 1 mac 1a:1a:1a:de:ad:01
echo 0033:01:00.3 > /sys/bus/pci/drivers/mlx5_core/bind
echo 0033:01:00.4 > /sys/bus/pci/drivers/mlx5_core/unbind
ip link set enP51p1s0f0 vf 2 mac 1a:1a:1a:de:ad:02
echo 0033:01:00.4 > /sys/bus/pci/drivers/mlx5_core/bind
echo 0033:01:00.5 > /sys/bus/pci/drivers/mlx5_core/unbind
ip link set enP51p1s0f0 vf 3 mac 1a:1a:1a:de:ad:03
echo 0033:01:00.5 > /sys/bus/pci/drivers/mlx5_core/bind
echo 0033:01:00.6 > /sys/bus/pci/drivers/mlx5_core/unbind
ip link set enP51p1s0f0 vf 4 mac 1a:1a:1a:de:ad:04
echo 0033:01:00.6 > /sys/bus/pci/drivers/mlx5_core/bind
ip link set enP51p1s0f0v0 mtu 9000
ip link set enP51p1s0f0v1 mtu 9000
ip link set enP51p1s0f0v2 mtu 9000
ip link set enP51p1s0f0v3 mtu 9000
ip link set enP51p1s0f0v4 mtu 9000
# --- enp1s0f0 ---
echo 0000:01:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
ip link set enp1s0f0 vf 0 mac 3a:3a:3a:de:ad:00
echo 0000:01:00.2 > /sys/bus/pci/drivers/mlx5_core/bind
echo 0000:01:00.3 > /sys/bus/pci/drivers/mlx5_core/unbind
ip link set enp1s0f0 vf 1 mac 3a:3a:3a:de:ad:01
echo 0000:01:00.3 > /sys/bus/pci/drivers/mlx5_core/bind
echo 0000:01:00.4 > /sys/bus/pci/drivers/mlx5_core/unbind
ip link set enp1s0f0 vf 2 mac 3a:3a:3a:de:ad:02
echo 0000:01:00.4 > /sys/bus/pci/drivers/mlx5_core/bind
echo 0000:01:00.5 > /sys/bus/pci/drivers/mlx5_core/unbind
ip link set enp1s0f0 vf 3 mac 3a:3a:3a:de:ad:03
echo 0000:01:00.5 > /sys/bus/pci/drivers/mlx5_core/bind
echo 0000:01:00.6 > /sys/bus/pci/drivers/mlx5_core/unbind
ip link set enp1s0f0 vf 4 mac 3a:3a:3a:de:ad:04
echo 0000:01:00.6 > /sys/bus/pci/drivers/mlx5_core/bind
ip link set enp1s0f0v0 mtu 9000
ip link set enp1s0f0v1 mtu 9000
ip link set enp1s0f0v2 mtu 9000
ip link set enp1s0f0v3 mtu 9000
ip link set enp1s0f0v4 mtu 9000
# And finally start our VMs
# virsh start
virsh start essafm-2-1
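The start-vms.service unit pushed by the playbook is not shown above; a minimal oneshot unit along these lines would do the job (the ordering after libvirtd is an assumption, adjust as needed):
[Unit]
Description=Configure SR-IOV VFs and start KVM guests
After=libvirtd.service network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/start-vms
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target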
AFM gateway configuration
VMs for AFM gateways are installed through:
#### On esskvm-1-1:
#
# lvcreate --size 100G --name vm-essafm-1-1 rootvg
# virt-install --connect qemu:///system --name essafm-1-1 --ram 196608 \
--vcpus 32 --disk path=/dev/rootvg/vm-essafm-1-1 \
--network=bridge:br0,model=virtio --os-type=linux --os-variant=rhel8.4 \
--extra-args="console=ttyS0,115200n8 ip=192.168.45.138:::24:essafm-1-1:eth0:none net.ifnames=0 \
inst.repo=ftp://192.168.45.6/rhel84 inst.ks=ftp://192.168.45.6/kickstart-vm.cfg" \
--location=ftp://192.168.45.6/rhel84 --serial pty --graphics none --console pty,target.type=virtio \
--hostdev net_enp1s0f0v1_1a_1a_1a_ab_ba_00 --hostdev net_enP51p1s0f0v1_3a_3a_3a_ab_ba_00
#
#
#### On esskvm-1-2:
# lvcreate --size 100G --name vm-essafm-1-2 rootvg
# virt-install --connect qemu:///system --name essafm-1-2 --ram 196608 \
--vcpus 32 --disk path=/dev/rootvg/vm-essafm-1-2 \
--network=bridge:br0,model=virtio --os-type=linux --os-variant=rhel8.4 \
--extra-args="console=ttyS0,115200n8 ip=192.168.45.139:::24:essafm-1-2:eth0:none net.ifnames=0 \
inst.repo=ftp://192.168.45.6/rhel84 inst.ks=ftp://192.168.45.6/kickstart-vm.cfg" \
--location=ftp://192.168.45.6/rhel84 --serial pty --graphics none --console pty,target.type=virtio \
--hostdev net_enp1s0f0v1_0a_0a_0a_ab_ba_00 --hostdev net_enP51p1s0f0v1_2a_2a_2a_ab_ba_00
Protocol cluster installation
Each protocol cluster consists of 2 protocol nodes, plus a quorum node at the second location. These are installed the same way as the AFM gateways:
#### On esskvm-1-1:
#
# lvcreate --size 100G --name vm-ces-1-1 rootvg
# virt-install --connect qemu:///system --name ces-1-1 --ram 196608 \
--vcpus 32 --disk path=/dev/rootvg/vm-ces-1-1 \
--network=bridge:br0,model=virtio --os-type=linux --os-variant=rhel8.4 \
--extra-args="console=ttyS0,115200n8 ip=192.168.45.13:::24:ces-1-1:eth0:none net.ifnames=0 \
inst.repo=ftp://192.168.45.6/rhel84 inst.ks=ftp://192.168.45.6/kickstart-vm.cfg" \
--location=ftp://192.168.45.6/rhel84 --serial pty --graphics none --console pty,target.type=virtio \
--hostdev net_enp1s0f0v1_1a_1a_1a_ab_ba_01 --hostdev net_enP51p1s0f0v1_3a_3a_3a_ab_ba_01
#
#
#### On esskvm-1-2:
# lvcreate --size 100G --name vm-ces-1-2 rootvg
# virt-install --connect qemu:///system --name ces-1-2 --ram 196608 \
--vcpus 32 --disk path=/dev/rootvg/vm-ces-1-2 \
--network=bridge:br0,model=virtio --os-type=linux --os-variant=rhel8.4 \
--extra-args="console=ttyS0,115200n8 ip=192.168.45.14:::24:ces-1-2:eth0:none net.ifnames=0 \
inst.repo=ftp://192.168.45.6/rhel84 inst.ks=ftp://192.168.45.6/kickstart-vm.cfg" \
--location=ftp://192.168.45.6/rhel84 --serial pty --graphics none --console pty,target.type=virtio \
--hostdev net_enp1s0f0v1_0a_0a_0a_ab_ba_01 --hostdev net_enP51p1s0f0v1_2a_2a_2a_ab_ba_01
#
#
#### On esskvm-2-1:
# lvcreate --size 100G --name vm-ces-1-quorum-2-1 rootvg
# virt-install --connect qemu:///system --name ces-1-quorum-2-1 --ram 16384 \
--vcpus 32 --disk path=/dev/rootvg/vm-ces-1-quorum-2-1 \
--network=bridge:br0,model=virtio --os-type=linux --os-variant=rhel8.4 \
--extra-args="console=ttyS0,115200n8 ip=192.168.45.139:::24:ces-1-quorum-2-1:eth0:none net.ifnames=0 \
inst.repo=ftp://192.168.45.6/rhel84 inst.ks=ftp://192.168.45.6/kickstart-vm.cfg" \
--location=ftp://192.168.45.6/rhel84 --serial pty --graphics none --console pty,target.type=virtio \
--hostdev net_enp1s0f0v1_1a_1a_1a_de_ad_01 --hostdev net_enP51p1s0f0v1_3a_3a_3a_de_ad_01
Once we have installed the 2 protocol VMs on the primary site (esskvm-1-1 and esskvm-1-2) and a minimal quorum node on the secondary site (esskvm-2-1 or esskvm-2-2), we are ready to configure the protocol clusters. This is done by first connecting to the first protocol node and running:
# sh Spectrum_Scale_Data_Management-5.1.3.0-ppc64LE-Linux-install
# cd /usr/lpp/mmfs/5.1.3.0/ansible-installer
# ./spectrumscale setup -st ess
# ./spectrumscale node add ces-1-1-hs -p -q -m
# ./spectrumscale node add ces-1-2-hs -p -q -m
# ./spectrumscale node add ces-1-quorum-2-1-hs -q
etc...