| sidebar | permalink | keywords | summary |
| --- | --- | --- | --- |
| sidebar | kubernetes-reqs-gke.html | kubernetes, k8s, discover kubernetes cluster, discover k8s, google, google cloud, gke, kubernetes support | You can add and manage managed Google Kubernetes Engine (GKE) clusters and self-managed Kubernetes clusters in Google using Cloud Manager. Before you can add the clusters to Cloud Manager, ensure the following requirements are met. |
You can add and manage managed Google Kubernetes Engine (GKE) clusters and self-managed Kubernetes clusters in Google using Cloud Manager. Before you can add the clusters to Cloud Manager, ensure the following requirements are met.
This topic uses the term *Kubernetes cluster* where configuration is the same for GKE and self-managed Kubernetes clusters. The cluster type is specified where configuration differs.
- **Astra Trident**: The Kubernetes cluster must have NetApp Astra Trident deployed. Install one of the four most recent versions of Astra Trident using Helm. Go to the Astra Trident docs for installation steps using Helm.
- **Cloud Volumes ONTAP**: Cloud Volumes ONTAP must be in Cloud Manager under the same tenancy account, workspace, and Connector as the Kubernetes cluster. Go to the Astra Trident docs for configuration steps.
- **Cloud Manager Connector**: A Connector must be running in Google with the required permissions. Learn more below.
- **Network connectivity**: Network connectivity is required between the Kubernetes cluster and the Connector and between the Kubernetes cluster and Cloud Volumes ONTAP. Learn more below.
- **RBAC authorization**: Cloud Manager supports RBAC-enabled clusters with and without Active Directory. The Cloud Manager Connector role must be authorized on each GKE cluster. Learn more below.
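As a sketch of the Astra Trident requirement above, Trident can be installed with Helm. The repository URL, chart name, and namespace below are assumptions based on NetApp's published Trident Helm chart — confirm them against the Astra Trident docs before running:

```shell
# Add the NetApp Trident Helm repository (URL is an assumption; verify in the Trident docs)
helm repo add netapp-trident https://netapp.github.io/trident-helm-chart
helm repo update

# Install the Trident operator into its own namespace
helm install trident netapp-trident/trident-operator \
  --namespace trident --create-namespace

# Verify that the Trident pods come up
kubectl get pods -n trident
```

Installing through the operator chart means upgrades to a newer supported Trident version can later be done with `helm upgrade` against the same release name.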
A Cloud Manager Connector in Google is required to discover and manage Kubernetes clusters. You’ll need to create a new Connector or use an existing Connector that has the required permissions.
Follow the steps in one of the links below.
If you want to discover a managed GKE cluster, you might need to modify the custom role for the Connector to provide the required permissions:
1. In the Cloud Console, go to the Roles page.
2. Using the drop-down list at the top of the page, select the project or organization that contains the role that you want to edit.
3. Click a custom role.
4. Click Edit Role to update the role’s permissions.
5. Click Add Permissions to add the following new permissions to the role:

   ```
   container.clusters.get
   container.clusters.list
   ```

6. Click Update to save the edited role.
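The same edit can be sketched with the gcloud CLI instead of the Cloud Console; `<role-id>` and `<project-id>` are placeholders for your environment:

```shell
# Add the GKE discovery permissions to an existing custom role
gcloud iam roles update <role-id> \
  --project=<project-id> \
  --add-permissions=container.clusters.get,container.clusters.list

# Confirm the role now includes the permissions
gcloud iam roles describe <role-id> --project=<project-id>
```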
You need to provide network connectivity between the Kubernetes cluster and the Connector and between the Kubernetes cluster and the Cloud Volumes ONTAP system that provides backend storage to the cluster.
- Each Kubernetes cluster must have an inbound connection from the Connector.
- The Connector must have an outbound connection to each Kubernetes cluster over port 443.
The simplest way to provide this connectivity is to deploy the Connector and Cloud Volumes ONTAP in the same VPC as the Kubernetes cluster. Otherwise, you need to set up a VPC peering connection between the different VPCs.
Here’s an example that shows each component in the same VPC.
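If the components are in different VPCs, peering can be sketched with gcloud. The network names, peering names, and project ID below are placeholder assumptions, and a peering must be created from each side before traffic flows:

```shell
# Peer the Connector's VPC with the cluster's VPC (one peering per direction)
gcloud compute networks peerings create connector-to-cluster \
  --network=connector-vpc \
  --peer-project=<project-id> \
  --peer-network=cluster-vpc

gcloud compute networks peerings create cluster-to-connector \
  --network=cluster-vpc \
  --peer-project=<project-id> \
  --peer-network=connector-vpc
```

Remember that firewall rules must still allow the inbound connection from the Connector and port 443 to the cluster; peering only provides the route.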
RBAC validation occurs only on Kubernetes clusters with Active Directory (AD) enabled. Kubernetes clusters without AD will pass validation automatically.
You need to authorize the Connector role on each Kubernetes cluster so that the Connector can discover and manage the cluster.
To configure `subjects: name:` in the YAML file, you need to know the Cloud Manager unique ID.
You can find the unique ID in one of two ways:

- Using the following commands:

  ```
  gcloud iam service-accounts list
  gcloud iam service-accounts describe <service-account-email>
  ```

- In the Service Account Details on the Cloud Console.
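For example, the unique ID can be pulled directly with a `--format` filter; the service account email is a placeholder for the Connector's service account:

```shell
# List service accounts to find the Connector's service account email
gcloud iam service-accounts list --format='table(displayName, email)'

# Print only the unique ID for that service account
gcloud iam service-accounts describe <service-account-email> \
  --format='value(uniqueId)'
```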
1. Create a cluster role and role binding.

   Create a YAML file that includes the following text. Replace the `subjects: kind:` variable with your username and `subjects: name:` with the unique ID for the authorized service account.

   ```yaml
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRole
   metadata:
     name: cloudmanager-access-clusterrole
   rules:
   - apiGroups:
     - ''
     resources:
     - secrets
     - namespaces
     - persistentvolumeclaims
     - persistentvolumes
     verbs:
     - get
     - list
     - create
   - apiGroups:
     - storage.k8s.io
     resources:
     - storageclasses
     verbs:
     - get
     - list
   - apiGroups:
     - trident.netapp.io
     resources:
     - tridentbackends
     - tridentorchestrators
     verbs:
     - get
     - list
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: k8s-access-binding
   subjects:
   - kind: User
     name: "uniqueID"
     apiGroup: rbac.authorization.k8s.io
   roleRef:
     kind: ClusterRole
     name: cloudmanager-access-clusterrole
     apiGroup: rbac.authorization.k8s.io
   ```

2. Apply the configuration to a cluster.

   ```
   kubectl apply -f <file-name>
   ```
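After applying the file, you can confirm that the cluster role and binding exist; the object names below match those used in the YAML:

```shell
# Confirm the cluster role and binding were created
kubectl get clusterrole cloudmanager-access-clusterrole
kubectl get clusterrolebinding k8s-access-binding

# Inspect the binding to verify the subject carries the expected unique ID
kubectl describe clusterrolebinding k8s-access-binding
```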