Description
During a chaos testing run that simulated pod failures (kubelet) on all nodes belonging to a specific AZ, it was observed that the critical Istio workloads were all transiently down, stuck either in the Pending or the Init state. The reason is that all replicas were being scheduled on nodes belonging to the same AZ, and kept being scheduled there during pod termination/eviction.
For both istio-ingressgateway and istiod running on enterprise plan runtimes (which have at least 3 nodes across 2 AZs), ensure that the pods are configured with either topology spread constraints or pod anti-affinity rules to preferably:
- spread the replicas across different nodes, and furthermore
- spread the replicas across nodes belonging to different availability zones.
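Below is a minimal sketch of what such a soft (preferred) spreading configuration could look like on the istio-ingressgateway pod template. The `app: istio-ingressgateway` label selector and the exact place where the Istio Operator would inject this are assumptions for illustration, not confirmed chart values:

```yaml
# Illustrative only: soft topology spread constraints for istio-ingressgateway.
# Label selector values are assumptions; adjust to the labels actually used.
spec:
  template:
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone   # spread across availability zones
          whenUnsatisfiable: ScheduleAnyway          # soft constraint: prefer, don't block scheduling
          labelSelector:
            matchLabels:
              app: istio-ingressgateway
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname        # spread across individual nodes
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: istio-ingressgateway
```

With `whenUnsatisfiable: ScheduleAnyway`, the scheduler treats the constraints as a preference, so pods can still be scheduled when a whole zone is unavailable. The same intent can alternatively be expressed with `podAntiAffinity` rules using `preferredDuringSchedulingIgnoredDuringExecution` on the same topology keys. An analogous block would apply to istiod.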
Reasons
Higher resiliency and availability during node-level and zone-level failure scenarios.
DoD:
- Provide unit and integration tests.
- Provide documentation.
- Verify if the solution works for both open-source Kyma and SAP BTP, Kyma runtime.
- If you changed the resource limits, explain why it was needed.
- If the default configuration of the Istio Operator has been changed, perform a manual upgrade test to verify that the change can be rolled out correctly.
- Verify that your contributions don't decrease code coverage. If they do, explain why this is the case.
- Add release notes.
Attachments