Infinite fast loop on sidecar injector #415
Comments
Hi @dguihal, that is a bit odd; that log message seems to indicate that the MutatingWebhookConfiguration is being updated by something repeatedly. I wonder if flux could be trying to reconcile it with its initial state, since the webhook config's caBundle is set by the injector at runtime.
Hi @tvoran, thanks for taking the time to answer. That could indeed have been the cause, but I think it's something else. The MutatingWebhookConfiguration comes as part of the (hashicorp-vault) chart and not as a raw flux resource, so it isn't synced directly by flux; only a redeployment of the HelmRelease would trigger that, which doesn't occur. Also, we deploy this HelmRelease the same way on EKS clusters, with far less log spam (it happens from time to time, but only for short time frames). Unfortunately I'm not very familiar with the Kubernetes Go APIs:
`subcommand/injector/command.go`, line 369 (commit 6ec9741)
`subcommand/injector/command.go`, line 454 (commit 6ec9741)
I was thinking about some non-acknowledged signal or similar, but from the code I can see, it doesn't seem to exist.
Right, the log message is coming from the part of the code that watches the mutating webhook config for changes, and then checks to make sure the caBundle is still set correctly. It does this to ensure that something besides vault-k8s doesn't clobber the webhook config. Is the injector working when it's logging repeatedly like that in AKS? You might be able to see what's changing by watching the MutatingWebhookConfiguration object itself.
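For context, the object being watched here is the injector's MutatingWebhookConfiguration, and the caBundle sits under each webhook's clientConfig. A rough sketch of the relevant shape (names follow the chart's defaults, values abbreviated; not taken from this issue):

```yaml
# Illustrative shape of the object the injector watches; the injector
# re-applies clientConfig.caBundle if something else changes the object.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: vault-agent-injector-cfg
webhooks:
  - name: vault.hashicorp.com
    clientConfig:
      service:
        name: vault-agent-injector-svc
        namespace: vault
        path: /mutate
      caBundle: <base64-encoded CA cert maintained by the injector>
    admissionReviewVersions: ["v1", "v1beta1"]
    sideEffects: None
```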
Still investigating on my side. I saw that "something" is adding a namespaceSelector to the MutatingWebhookConfiguration on AKS clusters (and not on EKS ones).
Could it be the source of those logs?
Yep, that would cause those messages. It sounds like something on the Azure side is making those updates: Azure/AKS#1771. They discuss it a bit here: https://learn.microsoft.com/en-us/azure/aks/faq#can-i-use-admission-controller-webhooks-on-aks

So the easiest way to reduce the log noise would be to add that selector to the chart deployment:

```yaml
webhook:
  failurePolicy: Fail
  objectSelector:
    matchLabels:
      vault-sidecar-injector: "true"
  namespaceSelector:
    matchExpressions:
      - key: control-plane
        operator: DoesNotExist
```

That should reduce the spurious updates from both sides.
@travisforte Unfortunately not...
I have the same problem when running 2 vault injector pods for HA.
@pbromb My guess is this is because each pod in the deployment is updating the CA bundle and triggering the other pods. I am still working out if I can solve it in Helm. I solved it, had to add …
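The exact snippet isn't visible here; judging from the later comments in this thread, the relevant chart option is the injector's leader election setting (together with more than one replica), roughly:

```yaml
# Sketch of the vault-helm values that enable leader election so only one
# injector replica manages the webhook's caBundle (assumed from the thread,
# not necessarily the commenter's exact change).
injector:
  replicas: 2
  leaderElector:
    enabled: true
```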
@rcousens Thought that the …
I am seeing the same issue on my AKS cluster.
I'm using injector image: …
Chart.yaml: …
`kubectl logs -n vault vault-agent-injector-86c8f7dccf-9lmw9 | head -n 15`
I do see the logs: …
What is the solution to this issue? We are facing the same issue on GKE.
This is already enabled: …
Don't forget to increase the replicas:

```yaml
injector:
  # Any number greater than 1 is required to enable leader election.
  replicas: 2
  leaderElector:
    enabled: true
```

https://github.com/hashicorp/vault-helm/blob/v0.28.1/templates/injector-deployment.yaml#L96
Describe the bug
Here is a sample taken from the vault sidecar injector deployed on an Azure AKS cluster: …
Looks like there is some fast-loop event which doesn't end.
Strangely, the same deployment on an AWS EKS cluster doesn't show the same behavior.
To Reproduce
Seen with versions 0.17 and 1.0.1 of the Docker image.
Seen with versions 0.22 and 0.23 of the Helm chart.
Application deployment:
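For reference, a minimal sketch of a vault-helm values file with the injector enabled, matching the versions mentioned above (illustrative only; not the reporter's actual manifest):

```yaml
# Hypothetical minimal values for the hashicorp/vault chart (0.22/0.23)
# with the sidecar injector enabled; tag corresponds to vault-k8s 1.0.1.
injector:
  enabled: true
  replicas: 1
  image:
    repository: hashicorp/vault-k8s
    tag: "1.0.1"
```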
Other useful info to include: `kubectl describe deployment <app>` and `kubectl describe replicaset <app>` output.
Expected behavior
Compared to the same deployment on EKS clusters, I expected far less log output.
Environment
Additional context