Do you want to start your beginner-to-pro journey with Anthos Bare Metal? This tutorial guides you through setting up Anthos Bare Metal 1.8.2 on a single GCE instance. No prior understanding of Anthos or Anthos Bare Metal is required, and you don't need to be a networking or Kubernetes geek to get started. As long as you have access to Google Cloud, you will be able to get up and running on a single GCE machine.
gcloud compute disks create abmvirt-disk --image-project=ubuntu-os-cloud --image-family=ubuntu-2004-lts --zone=us-west1-a --size=200G --type=pd-ssd
gcloud compute images create abmvirt-image --source-disk abmvirt-disk --source-disk-zone us-west1-a --licenses "https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"
gcloud compute instances create abm-on-gce --zone=us-west1-a --image abmvirt-image --can-ip-forward --network default --tags http-server,https-server --min-cpu-platform "Intel Haswell" --scopes cloud-platform --machine-type n1-standard-32
Log in to the GCE instance.
gcloud compute ssh abm-on-gce
If you wish to run legacy Windows apps on the platform, you will need KubeVirt, which uses KVM to manage VMs on top of Kubernetes. Ensure the machine is able to run KVM. The following command checks whether the CPU supports hardware virtualization. If the output is 0, the CPU does not support hardware virtualization and you will not be able to run KubeVirt VMs on ABM; a non-zero output means you can.
grep -Eoc '(vmx|svm)' /proc/cpuinfo
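To make the result explicit, you can wrap the check in a small snippet; the threshold logic below is simply the standard interpretation of the grep count:

```shell
# Count logical CPUs whose flags include vmx (Intel) or svm (AMD).
# grep -c exits non-zero when there is no match, hence the || true.
count=$(grep -Eoc '(vmx|svm)' /proc/cpuinfo || true)
if [ "${count:-0}" -eq 0 ]; then
  echo "No hardware virtualization support; KubeVirt VMs will not run here."
else
  echo "Hardware virtualization available on ${count} logical CPU(s)."
fi
```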
Check if your system can run hardware-accelerated KVM virtual machines.
sudo apt-get update && sudo apt-get upgrade
sudo apt install cpu-checker
kvm-ok
Validate that the processor's virtualization capability is not disabled in the BIOS. The output should look like the following.
INFO: /dev/kvm exists
KVM acceleration can be used
Set up your environment and take note of your IP address.
sudo apt install jq
gcloud auth application-default login
chmod +x setenv.sh
. ./setenv.sh
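The setenv.sh script comes from the tutorial repository and is not reproduced here. Purely as orientation, such a script typically exports values like the following; this is a hypothetical sketch, written to a separate file and syntax-checked only:

```shell
# Hypothetical sketch of what setenv.sh might export; the real script in
# the tutorial repository may differ. Syntax-checked, not executed.
cat > setenv-sketch.sh <<'EOF'
export PROJECT_ID=$(gcloud config get-value project)
export ZONE=us-west1-a          # assumed; match the zone used earlier
export MACHINE_NAME=abm-on-gce
export IP=$(ip --json a show dev ens4 | jq '.[0].addr_info[0].local' -r)
echo "Machine IP: ${IP}"        # take note of this address
EOF
bash -n setenv-sketch.sh
```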
sudo bash
apt-get -qq update > /dev/null
apt-get -qq install -y jq > /dev/null
set -x
ip link add vxlan0 type vxlan id 42 dev ens4 dstport 0
export CURRENT_IP=$(ip --json a show dev ens4 | jq '.[0].addr_info[0].local' -r)
echo "VM IP address is: $CURRENT_IP"
bridge fdb append to 00:00:00:00:00:00 dst $CURRENT_IP dev vxlan0
ip addr add 10.200.0.2/24 dev vxlan0 # Remember the IP 10.200.0.2, this is VXLAN ip of this machine
ip link set up dev vxlan0
systemctl stop apparmor.service
systemctl disable apparmor.service
Verify the new VXLAN interface is set up on this machine.
sudo apt install net-tools
ifconfig # search for VXLAN IP
ping 10.200.0.2
Generate SSH keys for the user you are logged in as and give yourself passwordless SSH access to the machine.
ssh-keygen # don't enter any passphrase, use default file paths
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# verify you can login without password
ssh [your-username]@[ip address of your machine] # you can find it from setenv.sh script
Now, generate SSH keys for root.
sudo bash
ssh-keygen # don't enter any passphrase, use default file paths
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# verify you can login without password
ssh root@[ip address of your machine] # you can find it from setenv.sh script
sudo apt-get install -y docker.io
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
docker run hello-world
gsutil cp gs://anthos-baremetal-release/bmctl/1.8.2/linux-amd64/bmctl bmctl
chmod a+x bmctl
./bmctl create config -c [cluster-name] \
--enable-apis --create-service-accounts --project-id=$PROJECT_ID
cp single-gce-abm-with-vxlan.yaml bmctl-workspace/[cluster-name]/[cluster-name].yaml
**Make sure** you edit the bmctl-workspace/[cluster-name]/[cluster-name].yaml file to replace the cluster name.
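The placeholder can also be replaced non-interactively with sed. CLUSTER_NAME below is an assumed example name, and the exact placeholder text may differ in your copy of the file:

```shell
CLUSTER_NAME=my-abm-cluster   # assumed example name; pick your own
# Demonstration of the substitution on a sample line:
echo "clusterName: [cluster-name]" |
  sed "s/\[cluster-name\]/${CLUSTER_NAME}/g"
# prints: clusterName: my-abm-cluster

# Against the real file you would run it in place:
# sed -i "s/\[cluster-name\]/${CLUSTER_NAME}/g" \
#   bmctl-workspace/${CLUSTER_NAME}/${CLUSTER_NAME}.yaml
```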
./bmctl create cluster -c [cluster-name]
chmod +x login-token.sh
./login-token.sh
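The login-token.sh script presumably follows the standard Anthos bearer-token flow: create a Kubernetes service account, bind it to cluster-admin, and read its token. A hypothetical sketch is shown below, defined as a function rather than executed (the service account name is an assumption):

```shell
# Hypothetical sketch of a bearer-token helper; the tutorial's actual
# login-token.sh may differ. Defined only, not invoked here.
get_login_token() {
  KSA_NAME=abm-console-admin   # assumed service account name
  kubectl create serviceaccount -n kube-system "${KSA_NAME}"
  kubectl create clusterrolebinding "${KSA_NAME}-binding" \
    --clusterrole cluster-admin \
    --serviceaccount "kube-system:${KSA_NAME}"
  SECRET_NAME=$(kubectl get serviceaccount -n kube-system "${KSA_NAME}" \
    -o jsonpath='{.secrets[0].name}')
  kubectl get secret -n kube-system "${SECRET_NAME}" \
    -o jsonpath='{.data.token}' | base64 --decode
}
```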
Navigate to the Anthos console and click the name of your cluster. In the right-hand panel, click Login and select the Bearer token option. Paste the token generated above to log in to the cluster.
**Congratulations!** You have a single-machine Anthos cluster running on a single GCE instance. Give yourself a pat on the back, you have done it! All the next steps are optional. Your Kubernetes cluster is ready for containerized applications.
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.1.2/deploy/longhorn.yaml
kubectl get pods --namespace longhorn-system --watch
kubectl -n longhorn-system get pod
Verify Longhorn is listed as a storage class and make it the default storage class.
kubectl get sc
kubectl patch storageclass longhorn -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
kubectl get sc
USER=<USERNAME_HERE>; PASSWORD=<PASSWORD_HERE>; echo "${USER}:$(openssl passwd -stdin -apr1 <<< ${PASSWORD})" >> auth
kubectl -n longhorn-system create secret generic basic-auth --from-file=auth
kubectl get svc -n longhorn-system
As you can see, the longhorn-frontend service is not exposed outside the cluster. You can use a service of type LoadBalancer. Since we are using VXLAN, ABM will allocate an external IP from the range we provided in the cluster configuration file, which is a range of VXLAN IPs.
kubectl edit svc longhorn-frontend -n longhorn-system # Change spec.type from ClusterIP to LoadBalancer
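Instead of editing the service interactively, the same change can be made with a standard kubectl merge patch. Since this requires a running cluster, the sketch below only defines the command as a function:

```shell
# Non-interactive alternative to `kubectl edit` (a sketch; requires a
# running cluster, so it is defined here but not invoked).
expose_longhorn_frontend() {
  kubectl -n longhorn-system patch svc longhorn-frontend \
    -p '{"spec":{"type":"LoadBalancer"}}'
}
```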
kubectl get svc -n longhorn-system
Download the Windows 10 image from here. Upload the image to a GCS bucket so that you can download it on any machine later.
gsutil cp [/path/to]/Win10_21H1_English_x64.iso gs://[your gcs bucket]
Then download the ISO image onto the machine where you will run kubectl.
gsutil cp gs://[your gcs bucket]/Win10_21H1_English_x64.iso .
kubectl get svc -n cdi
Take note of the IP of the cdi-uploadproxy service, then upload the ISO image.
kubectl get sc
kubectl virt image-upload \
--image-path=/absolute_path_to_windows_iso/Win10_21H1_English_x64.iso \
--pvc-name=windows-iso-pvc \
--access-mode=ReadWriteOnce \
--pvc-size=10G \
--uploadproxy-url=https://[cdi-uploadproxy svc IP]:443 \
--insecure \
--wait-secs=240 \
--storage-class=longhorn
kubectl get pvc
cd kubevirt
kubectl create -f windows-pvc.yaml
kubectl create -f windows-vm.yaml
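The repository's windows-vm.yaml is not shown here. For orientation only, a minimal KubeVirt VirtualMachine manifest for this setup might look roughly like the sketch below; names such as windows-vm and windows-pvc are assumptions, and the manifest is written to a separate sketch file rather than applied:

```shell
# Hypothetical sketch of a KubeVirt VirtualMachine manifest; the actual
# windows-vm.yaml in the repository may differ. Written to a sketch file
# only, not applied to the cluster.
cat > windows-vm-sketch.yaml <<'EOF'
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: windows-vm             # assumed VM name
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 4
        resources:
          requests:
            memory: 8G
        devices:
          disks:
            - name: iso
              cdrom:
                bus: sata
            - name: harddrive
              disk:
                bus: virtio
      volumes:
        - name: iso
          persistentVolumeClaim:
            claimName: windows-iso-pvc   # PVC holding the uploaded ISO
        - name: harddrive
          persistentVolumeClaim:
            claimName: windows-pvc       # PVC from windows-pvc.yaml (assumed)
EOF
```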
Verify the PVC and VM are created
kubectl get pvc
kubectl get vm
kubectl get vmi
You need a UI console to kick-start the Windows installation, so we are going to set up VNC on the machine. You will be asked to set a password; note it down.
cd vnc
./setup-vnc.sh
Wait for all packages to be installed and the VNC server to start, then verify that the VNC server is running.
ps aux | grep vnc
Now copy the script in xstartup.sh to $HOME/.vnc/xstartup and restart the VNC server.
vncserver -kill :1
vncserver -geometry 1920x1080
Ensure the GCP firewall allows access to port 5901.
gcloud compute firewall-rules create vncserver --allow tcp:5901 --source-ranges 0.0.0.0/0
Launch your VNC client. You can download RealVNC for your desktop. Access the GCE instance at [public IP]:5901, then access the VM you started earlier.
kubectl get vm
kubectl virt vnc [your-vm]
You will need to install the Windows operating system.