Several improvements required for multi-zone installation by Helm

What happened?

I was using the "Universal on Kubernetes using Helm" mode. On this page, https://kuma.io/docs/2.4.x/production/cp-deployment/multi-zone/, the first section ("Setup the global control plane") in the current version of the document does not explain the following:
- How to find the correct Helm chart (we do have a page covering this, but it is not linked here).
- Where to get the default values.yaml.
- For certain field names in the docs for "Universal in Kubernetes", there should be a "kuma" prefix when setting chart values (applies to Kong Mesh).
- No namespace is specified when installing the zone control plane: "helm install --create-namespace" installs the Helm release into the user's current namespace, which may not be the one expected (see the command sketch after this list).
- The service described in the document as "global-remote-sync" should be named "kong-mesh-global-zone-sync".
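For illustration only, a minimal sketch of the missing steps, assuming the upstream chart repository at https://kumahq.github.io/charts, the kuma/kuma chart, the kuma-system namespace, and the controlPlane.mode=global value (all of these, plus the "kuma." value prefix for Kong Mesh, are assumptions rather than text taken from the docs page):

# Add the chart repository and locate the chart (assumed repo URL and chart name).
$ helm repo add kuma https://kumahq.github.io/charts
$ helm repo update

# Dump the default values.yaml shipped with the chart so it can be edited locally.
$ helm show values kuma/kuma > values.yaml

# Install the global control plane into an explicit namespace instead of the
# current one; --namespace plus --create-namespace avoids the "current namespace" surprise.
$ helm install kuma kuma/kuma \
    --namespace kuma-system \
    --create-namespace \
    --set controlPlane.mode=global

# For Kong Mesh, the same value would presumably need the "kuma." prefix,
# e.g. --set kuma.controlPlane.mode=global (assumed, per the note above).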
Hint the user to add the control plane to their local kumactl configuration
Right now, we instruct the user to verify connectivity to the control plane after installation. When the control plane is running in a Kubernetes cluster, this verification will fail unless the proper configuration is added to kumactl before the command is executed.
$ kumactl get zones
WARNING: Unable to confirm the server supports this kumactl version
Error: failed to list Zone: Get "http://localhost:5681/zones": dial tcp [::1]:5681: connect: connection refused
We'd better hint the user to add the global control plane to their local configuration before they execute the command:
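A minimal sketch of such a hint, assuming the control plane API is exposed by a Service named kuma-control-plane in the kuma-system namespace on port 5681, and that the local entry is called "global" (the Service name, namespace, and entry name are assumptions for illustration):

# Expose the global control plane API locally (assumed Service and namespace names).
$ kubectl port-forward svc/kuma-control-plane -n kuma-system 5681:5681

# Register the global control plane in the local kumactl configuration.
$ kumactl config control-planes add --name global --address http://localhost:5681

# The verification command from the docs now has something to talk to.
$ kumactl get zones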
Highlight the failure modes: an offline control plane MAY cause data plane failures
When the control plane goes offline:
- New data plane proxies cannot join the mesh. This includes new instances (Pods/VMs) created by the cluster or an automatic deployment mechanism (e.g. new pods created by a rolling update), which means a control plane failure will block application updates and scale-out events.
- Existing data plane instances will fail when their mTLS client certificates expire (the default is 24h; this can be changed depending on the mesh mTLS configuration, as in the sketch below).
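For context, a minimal sketch of where that expiration is configured, assuming a builtin CA backend on the default mesh; the backend name and the dpCert.rotation.expiration field are assumptions about the Mesh resource schema and worth verifying against the mTLS reference:

# Lengthen the data plane proxy certificate lifetime on the default mesh (assumed schema).
$ cat <<EOF | kubectl apply -f -
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1
    backends:
      - name: ca-1
        type: builtin
        dpCert:
          rotation:
            expiration: 72h
EOF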