For applications or workloads to be accessible from outside a Kubernetes cluster, an object such as an Ingress must be created that either connects to or creates a load balancer capable of routing Layer 4 (L4) and/or Layer 7 (L7) network traffic. NSX-ALB can be leveraged to provide L4 routing, L7 routing, or both.

This test case focuses on how to set up and use NSX-ALB (AVI) and the Avi Kubernetes Operator (AKO) to create an Ingress and a Layer 7-only load balancer for vSphere with Tanzu enabled and running on top of NSX-T, which provides Layer 4.
Note: The test procedure steps below are based on the following Tanzu NSX-ALB Blog.
1. Using the vSphere Administrator console and credentials, log in to the VC Web UI and navigate to Menu > Workload Management.
2. Select the Namespaces tab, then the ${SC_NAMESPACE_01} hyperlink.
3. Ensure `tkc01-small` is present in ${SC_NAMESPACE_01} and reports healthy. If not, deploy/redeploy the `tkc01-small` cluster from the SC02-TC01 test case.
4. Using the DevOps Engineer console and credentials, log in to the `tkc01-small` Tanzu Kubernetes Cluster:

    ```shell
    kubectl vsphere login --vsphere-username ${DEVOPS_USER_NAME} --server=https://${SC_API_VIP} --insecure-skip-tls-verify --tanzu-kubernetes-cluster-namespace ${SC_NAMESPACE_01} --tanzu-kubernetes-cluster-name tkc01-small
    ```

    Expected:

    ```
    Logged in successfully.

    You have access to the following contexts:
       ${SC_API_VIP}
       ${SC_NAMESPACE_01}
    ```
5. Confirm you are logged in to the `tkc01-small` Tanzu Kubernetes Cluster context:

    ```shell
    kubectl config current-context
    ```

    Expected:

    ```
    tkc01-small
    ```
6. Steps 6-15 will be performed from the AVI Controller management UI. First, create an IPAM profile and a DNS profile to automate DNS entry creation and IP address assignment when an Ingress is created. To create the DNS profile, go to Templates > Profiles > IPAM/DNS Profiles > Create > DNS Profile. To create the IPAM profile, go to Templates > Profiles > IPAM/DNS Profiles > Create > IPAM Profile.
7. Add the IPAM and DNS profiles to the Default Cloud, which is a vCenter cloud. Select the profiles created above in the appropriate drop-downs at Infrastructure > Clouds > Default Cloud.
8. While in the Default Cloud configuration, enable three settings: DHCP, Prefer Static Routes vs Directly Connected Network, and Use Static Routes for Network Resolution of VIP.
9. In the Default Cloud configuration, go to the Network tab and set the Management Network drop-down to the management VDS dPG or management NSX-T overlay segment.
10. Each Tanzu Kubernetes Cluster will require its own Service Engine (SE) group. Navigate to Infrastructure > Service Engine Group > CREATE, supply a Service Engine group name, and leave the remaining settings at their defaults.
11. Configure the VIP network by going to Infrastructure > Networks and adding an IP pool for the data network, which will be used for the VIPs.
12. Configure the default route by going to Infrastructure > Routing and creating/configuring the default gateway.
13. (Optional) You can manually configure the Service Engines (SEs); see the Tanzu NSX-ALB Blog linked above. For this test case we will allow automatic creation of the SEs.
14. To set up DNS delegation, first create a DNS Virtual Service (DNS-VS) at Applications > Virtual Services > CREATE VIRTUAL SERVICE.
15. Enable the DNS-VS by going to Administration > Settings > DNS Service and selecting the DNS-VS created in the step above.
16. On the main DNS server, add a subdomain delegation entry pointing to the NSX-ALB DNS-VS so that DNS delegation works.
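As an illustration, a BIND-style delegation in the parent zone might look like the following sketch. The subdomain, nameserver name, and VIP address below are placeholders, not values from this lab:

```
; In the parent zone file for example.com (hypothetical values)
avi.example.com.      IN NS  ns1.avi.example.com.   ; delegate the subdomain to the DNS-VS
ns1.avi.example.com.  IN A   192.168.28.10          ; glue record: the DNS-VS VIP address
```

With this in place, queries for names under `avi.example.com` are forwarded by the main DNS server to the NSX-ALB DNS-VS, which answers for the Ingress hostnames AKO creates.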
17. Going back to the DevOps Engineer console, create the `avi-system` namespace for the Avi Kubernetes Operator (AKO) installation:

    ```shell
    kubectl create ns avi-system
    ```

    Expected:

    ```
    namespace/avi-system created
    ```
18. Add the AKO repo to Helm (TODO[fcarta] - update with tanzu AKO link?):

    ```shell
    helm repo add ako https://avinetworks.github.io/avi-helm-charts/charts/stable/ako
    ```

    Expected:
19. Download the AKO values file and edit the values. (TODO[fcarta] - add details here or provide values.yaml file in lab)

    ```shell
    curl -JOL https://raw.githubusercontent.com/avinetworks/avi-helm-charts/master/charts/stable/ako/values.yaml
    ```

    Expected:
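The exact keys vary by AKO chart version, but the edits typically cover the controller address, cloud, cluster name, SE group, and VIP network configured in the earlier steps. A hedged sketch of the commonly edited values.yaml fields follows; all bracketed values are placeholders for this lab, and key names should be checked against the downloaded file:

```yaml
# Sketch of commonly edited AKO values.yaml fields (placeholder values;
# key names may differ between AKO chart versions - verify against the file)
AKOSettings:
  clusterName: tkc01-small          # unique name identifying this Tanzu Kubernetes Cluster
  disableStaticRouteSync: false     # let AKO sync pod CIDR static routes to the controller

NetworkSettings:
  networkName: <vip-network-name>   # the VIP/data network configured in the AVI UI

ControllerSettings:
  controllerVersion: <avi-version>
  cloudName: Default-Cloud          # the vCenter cloud configured above
  controllerIP: <avi-controller-ip>
  serviceEngineGroupName: <se-group-name>   # the SE group created for this cluster

avicredentials:
  username: <avi-username>
  password: <avi-password>
```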
20. Deploy AKO using Helm:

    ```shell
    helm install ako/ako --generate-name --version 1.2.1 -f values.yaml --namespace=avi-system
    ```

    Expected:
21. Poll the AKO pod status and wait for it to report a `Running` state:

    ```shell
    kubectl get pods -n avi-system -w
    ```

    Expected:

    ```
    NAME    READY   STATUS    RESTARTS   AGE
    ako-0   1/1     Running   0          1h
    ```
22. Verify in the AVI Controller management UI that AKO created static routes under Infrastructure > Routing.
23. The next steps validate automatic load balancer creation by deploying a sample application. Create an application namespace `yelb`:

    ```shell
    kubectl create ns yelb
    ```

    Expected:

    ```
    namespace/yelb created
    ```
24. Deploy the `yelb` test application:

    ```shell
    kubectl apply -f https://raw.githubusercontent.com/aidrees/yelb/main/yelb-no-lb.yaml -n yelb
    ```

    Expected:
25. Apply the `yelb` Ingress: download the YAML file, edit the host field to a DNS-delegated name, save, and then apply the edited file:

    ```shell
    curl -JOL https://raw.githubusercontent.com/aidrees/yelb/main/yelb-ingress.yaml
    kubectl apply -f yelb-ingress.yaml -n yelb
    ```

    Expected:
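For orientation, the edited yelb-ingress.yaml will contain a host rule along these lines. The hostname below is a placeholder for a name under the delegated subdomain, and the backend service name and port are assumptions about the yelb manifest, not verified values - check them against the downloaded file:

```yaml
# Sketch of the Ingress host rule to edit (placeholder hostname; backend
# service name/port assumed - verify against the downloaded yelb-ingress.yaml)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: yelb-ingress
  namespace: yelb
spec:
  rules:
  - host: yelb.<delegated-subdomain>   # replace with a name under the DNS-delegated zone
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: yelb-ui              # assumed frontend service from the yelb manifest
            port:
              number: 80
```

When this Ingress is applied, AKO should create a corresponding virtual service on the NSX-ALB controller and the DNS-VS should begin answering for the host.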
26. View the newly created Ingress:

    ```shell
    kubectl get ingress -n yelb
    ```

    Expected:

    ```
    NAME           HOSTS                     ADDRESS          PORTS   AGE
    yelb-ingress   yelb.ali-avi.vmwdxb.com   192.168.28.101   80      14m
    ```
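Once the Ingress reports an address, name resolution and reachability can be spot-checked from the DevOps Engineer console. The hostname below is taken from the example output above and should be replaced with the name used in your edited Ingress; these commands require access to the lab network:

```
# Spot-check DNS delegation and the L7 VIP (replace hostname with your delegated name)
nslookup yelb.ali-avi.vmwdxb.com          # should resolve to the VIP via the delegated DNS-VS
curl -I http://yelb.ali-avi.vmwdxb.com/   # expect an HTTP success response from the yelb UI
```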
27. TODO[fcarta] - complete validation steps and add an example with a Service of type LoadBalancer.
- Configuring vSphere Networking and NSX Advanced Load Balancer for vSphere with Tanzu
- NSX Advanced Load Balancer Components
- Install VMware NSX Advanced Load Balancer on a vSphere Distributed Switch
- [ ] Pass
- [ ] Fail
Return to Test Cases Inventory