Table of Contents
- Overview
- Network
- Setup Hypervisors
- Hypervisor Network-Impairments
- Create/Delete/Replace VMs
- Manifests
BM/RWN cluster types will allocate any remaining hardware that was not put into the cluster's inventory as hypervisor machines if `hv_inventory: true` is set in the `all.yml` vars file when running the `create-inventory.yml` playbook. This is typically used for testing ACM/MCE installed on a hub cluster, such that the VMs serve as host machines for spoke clusters.
Make sure to set and append the following vars in the "extra vars" section of `vars/all.yml` (a sample snippet follows the list):
- `hv_inventory` - Enables placing remaining cloud hardware into hypervisor host group in inventory file
- `hv_ssh_pass` - The ssh password to the hypervisor machines
- `hv_ip_offset` - Offsets hypervisor ip addresses to allow for future expansion of the "hub" cluster. For example, a setting of `10` allows the hub cluster to grow 10 nodes before the ip addresses will conflict with the hypervisors.
- `hv_vm_prefix` - Set to a specific prefix. Defaults to `sno` which produces vms with hostnames sno00001, sno00002, ... snoXXXXX
- `hypervisor_nic_interface_idx` - Defaults to `1` and corresponds to Network 1 in the scalelab. The index is used to lookup which nic name will be bridged for the VMs.
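As an illustration of the above, the appended "extra vars" portion of `ansible/vars/all.yml` might look like the sketch below; the values (password, offset, prefix, index) are placeholders, not recommended defaults.

```yaml
# Hypothetical "extra vars" appended to ansible/vars/all.yml (placeholder values)
hv_inventory: true                # place leftover cloud hardware into the hypervisor host group
hv_ssh_pass: changeme             # ssh password for the hypervisor machines
hv_ip_offset: 10                  # leave room for the hub cluster to grow by 10 nodes
hv_vm_prefix: sno                 # VM hostnames become sno00001, sno00002, ...
hypervisor_nic_interface_idx: 1   # bridge the nic corresponding to scalelab Network 1
```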
The default VM resource configuration is:
- 8 vCPUs
- 18Gi Memory
- 120G Disk
The number of VMs per hypervisor type depends largely on whether the machine has extra disks, given the default resource configuration. The count of VMs per machine type is hard-coded in the `vars/lab.yml` file. If you want to customize the count, adjust the value for each machine type as desired in `vars/lab.yml`, as sketched below.
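As a purely hypothetical sketch of that kind of adjustment (the real key names and layout must be taken from the shipped `vars/lab.yml`), a per-machine-type count might be expressed along these lines:

```yaml
# Hypothetical sketch only - consult the shipped vars/lab.yml for the actual structure and key names
hv_vm_counts:
  r640: 2   # VMs per r640 hypervisor (placeholder value)
  r650: 3   # VMs per r650 hypervisor (placeholder value)
```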
The hypervisors bridge a network interface that is selected at `create-inventory.yml` playbook time. Review your inventory before running `hv-setup.yml` to ensure the interface you intended was selected.
After generating an inventory with the `create-inventory.yml` playbook, the hypervisors can be set up. Start by editing the vars:
```
cp ansible/vars/hv.sample.yml ansible/vars/hv.yml
vi ansible/vars/hv.yml
```
Pay close attention to these vars (see the example after the list):
- `lab` - Likely `scalelab` as that is the only lab this has been setup and tested in
- `setup_hv_vm_dhcp` - Set to true if dnsmasq should be configured on each hypervisor to hand out static addresses to each vm
- `base_dns_name` - If you set this for your hub cluster, then set it identically here
- `controlplane_network` - If you adjusted this for the hub cluster, make sure it matches for the hypervisors
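As a rough sketch (the values are illustrative and must match your own hub cluster and lab), the relevant portion of `ansible/vars/hv.yml` might resemble:

```yaml
# Illustrative ansible/vars/hv.yml values - adjust to your environment
lab: scalelab
setup_hv_vm_dhcp: true                # run dnsmasq on each hypervisor for the VMs
base_dns_name: example.com            # must match the hub cluster's base_dns_name
controlplane_network: 198.18.0.0/16   # must match the hub cluster's controlplane_network
```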
Run hv-setup playbook:
```
ansible-playbook -i ansible/inventory/cloud42.local ansible/hv-setup.yml
```
For testing where network impairments are required, we can apply latency, packet-loss, and bandwidth impairments on the hypervisor nodes. The `create-inventory.yml` playbook automatically selects the scale lab "network 1" nic names for the host var `nic` in the hypervisor inventory. To change this, adjust `hypervisor_nic_interface_idx` as an extra var in the `all.yml` vars file.
To apply network impairments, first copy the network-impairments sample vars file:
```
cp ansible/vars/network-impairments.sample.yml ansible/vars/network-impairments.yml
vi ansible/vars/network-impairments.yml
```
Make sure to set/review the following vars (an illustrative snippet follows the list):
- `install_tc` - toggles installing traffic control
- `apply_egress_impairments` and `apply_ingress_impairments` - toggles out-going or incoming traffic impairments
- `egress_delay` and `ingress_delay` - latency for egress/ingress in milliseconds
- `egress_packet_loss` and `ingress_packet_loss` - packet loss in percent (Example `0.01` for 0.01%)
- `egress_bandwidth` and `ingress_bandwidth` - bandwidth in kilobits (Example `100000` which is 100000kbps or 100Mbps)
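For example, `ansible/vars/network-impairments.yml` might be filled in as follows; the numbers are illustrative only:

```yaml
# Illustrative ansible/vars/network-impairments.yml values
install_tc: true
apply_egress_impairments: true
apply_ingress_impairments: true
egress_delay: 50            # milliseconds
ingress_delay: 50           # milliseconds
egress_packet_loss: 0.01    # percent (0.01%)
ingress_packet_loss: 0.01   # percent (0.01%)
egress_bandwidth: 100000    # kilobits (100000kbps = 100Mbps)
ingress_bandwidth: 100000   # kilobits (100000kbps = 100Mbps)
```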
Apply impairments:
```
ansible-playbook -i ansible/inventory/cloud03.local ansible/hv-network-impairments.yml
```
Remove impairments:
```
ansible-playbook -i ansible/inventory/cloud03.local ansible/hv-network-impairments.yml -e 'apply_egress_impairments=false apply_ingress_impairments=false'
```
Note: egress impairments are applied directly to the impaired nic, while ingress impairments are applied to an ifb interface that handles ingress traffic for the impaired nic.
Three playbooks are included to create, delete, and replace the VMs. All three playbooks depend on the same vars file, which should be copied in the same fashion as the previous vars files:
```
cp ansible/vars/hv.sample.yml ansible/vars/hv.yml
vi ansible/vars/hv.yml
```
The following vars apply to the manifests which are generated for deploying OCP clusters from ACM/MCE using the VMs as "emulated BareMetal Nodes" (see the sketch after the list):
- `ssh_public_key_file` - Sets the permitted ssh key to ssh into the node
- `setup_hv_vm_dhcp` - Leaves the nmstateconfig portion out of the manifests
- `hv_vm_manifest_type` - Determines which kind of manifest(s) the playbook will generate, choose from `sno`, `jumbo`, and `multicluster`
- `hv_vm_manifest_acm_cr` - Set to true if ACM CRs are desired to be generated with the manifests
- `multicluster_count` - If `hv_vm_manifest_type: multicluster`, then this determines the number of cluster manifests to generate. It will include `multicluster_node_count` count of vms in each cluster manifest. Be careful not to exceed the entire count of vms
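As an illustrative sketch (the values and the ssh key path are placeholders), the manifest-related portion of `ansible/vars/hv.yml` could look like:

```yaml
# Illustrative manifest-related values in ansible/vars/hv.yml (placeholder values)
ssh_public_key_file: ~/.ssh/id_rsa.pub   # key permitted to ssh into the VMs
setup_hv_vm_dhcp: true                   # leaves the nmstateconfig portion out of the manifests
hv_vm_manifest_type: multicluster        # one of sno, jumbo, multicluster
hv_vm_manifest_acm_cr: true              # also generate ACM CRs with the manifests
multicluster_count: 4                    # number of cluster manifests to generate
multicluster_node_count: 5               # VMs per cluster manifest; do not exceed the total VM count
```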
Run create vms:
```
ansible-playbook -i ansible/inventory/cloud42.local ansible/hv-vm-create.yml
```
Run replace vms (deletes then creates vms):
```
ansible-playbook -i ansible/inventory/cloud42.local ansible/hv-vm-replace.yml
```
Run delete vms:
```
ansible-playbook -i ansible/inventory/cloud42.local ansible/hv-vm-delete.yml
```
When you create VMs, depending upon the `hv_vm_manifest_type` that was set, you will find pre-generated manifests to either deploy SNOs or traditional OCP clusters using ACM/MCE. Those manifests are located in:
```
# ls -lh /root/hv-vm/
total 0
drwxr-xr-x. 3 root root 23 Jul 22 18:34 jumbo
(.ansible) [root@f31-h05-000-r640 jetlag]# ls -lh /root/hv-vm/jumbo/manifests/
total 456K
-rw-r--r--. 1 root root 453K Jul 22 18:51 manifest.yml
```
As expected, the `jumbo` cluster type includes just one yaml file with all the manifests to create the jumbo cluster.
```
(.ansible) [root@f31-h05-000-r640 jetlag]# ls -lh /root/hv-vm/multicluster/manifests/
total 400K
-rw-r--r--. 1 root root 97K Jul 22 19:30 multicluster-00001.yml
-rw-r--r--. 1 root root 97K Jul 22 19:30 multicluster-00002.yml
-rw-r--r--. 1 root root 97K Jul 22 19:30 multicluster-00003.yml
-rw-r--r--. 1 root root 97K Jul 22 19:30 multicluster-00004.yml
```
Multicluster manifests are generated as one manifest file per cluster, each consisting of `multicluster_node_count` nodes.
SNO manifests are generated as a directory per SNO, since each SNO is a single node with several CRs.