Kubernetes Sidecar Usecase #115
Comments
Maybe for a separate issue... To support the new Kubernetes sidecar feature in 1.28+, a health check would also be needed.
For 2, we'd still have to share the PID namespace though. I don't think I understand that use case.
@nstott I ported the mysql bits to work with the bitnami mysql helm chart here: The problem is that, in order to trigger a reload of the certs in mysql, you need a working mysql client, so you need spiffe-helper available in the mysql container. I did manage to do some twisty bits with init containers to piece it together at runtime, but it's kind of ugly. Instead, if you had shared PIDs enabled and PID signaling, the sidecar could just contain spiffe-helper and send a signal over to a container sidecar that has the mysql client ready to go, to adapt the PID signal into a mysql reload-certs SQL command. I've prototyped that using the PID PR and a prototype CSI driver for spiffe-helper to make this kind of thing possible: That example actually works and signals nginx to properly reload when the certs get updated.
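For illustration, here is a minimal sketch of that signal-to-SQL adapter idea. All names and images are hypothetical, the spiffe-helper sidecar that would send the signal is omitted for brevity, and mysql client auth flags are elided; it assumes a shared process namespace so the helper can signal the adapter by PID, plus MySQL 8.0's `ALTER INSTANCE RELOAD TLS` to pick up rotated certs:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql-signal-adapter        # hypothetical example
spec:
  shareProcessNamespace: true       # lets spiffe-helper signal the adapter's PID
  containers:
    - name: mysql
      image: mysql:8.0
    - name: reload-adapter
      image: mysql:8.0              # reused purely for its mysql client binary
      command: ["/bin/sh", "-c"]
      args:
        - |
          # Translate a SIGHUP from spiffe-helper into a TLS reload in MySQL.
          # (Client auth flags omitted for brevity.)
          trap 'mysql -h 127.0.0.1 -e "ALTER INSTANCE RELOAD TLS"' HUP
          # Sleep in short intervals so the trap fires promptly on delivery.
          while true; do sleep 1 & wait $!; done
```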
A use case I'm now looking at is if you have
I thought a restartPolicy was ignored when it's in a terminating state? But if that's not the case, maybe a preStop hook would help.
I think I mis-stated the scenario. What we'd want in this scenario is: after the main container exits successfully, the sidecar knows it needs to exit too. https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase
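One common pattern for this, before native sidecars, is a completion file on a shared volume: the main container touches a file when it finishes, and a wrapper script around the sidecar polls for it and exits. A rough sketch; the names, the workload command, and the assumption that the spiffe-helper image ships a shell and a `-config` flag are all illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: job-with-sidecar-shutdown   # hypothetical example
spec:
  restartPolicy: Never
  containers:
    - name: main
      image: my-batch-job           # hypothetical workload image
      # Touch a file on the shared volume to signal completion.
      command: ["/bin/sh", "-c", "run-job; touch /var/run/shutdown/done"]
      volumeMounts:
        - name: shutdown
          mountPath: /var/run/shutdown
    - name: sidecar
      image: spiffe-helper
      command: ["/bin/sh", "-c"]
      args:
        - |
          # Run the helper in the background, then exit once the main
          # container reports completion so the pod can terminate.
          spiffe-helper -config /etc/helper.conf &
          until [ -f /var/run/shutdown/done ]; do sleep 1; done
      volumeMounts:
        - name: shutdown
          mountPath: /var/run/shutdown
  volumes:
    - name: shutdown
      emptyDir: {}
```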
@faisal-memon I think there is one more use case to handle. In the new sidecar mode of k8s, the container is run as an initContainer and blocks startup until a readiness probe passes. Currently, though, there is no way to do that with the spiffe-helper container image: there isn't an exec command or HTTP endpoint to check that the requested x509/JWT files have been created so that pod startup can continue.
There's no probe, but when run in non-daemon mode, it won't exit until the file is written, and K8s will wait for the init containers to exit. So what we've been doing is have one init container in non-daemon mode, and one container (or an init container with a restartPolicy of Always, a.k.a. a native sidecar) in daemon mode.
Right. So it supports the non-native k8s sidecar mode today, meaning there is a workaround. But there's no solution for native k8s sidecar mode yet.
I don't follow. I think we support both options today.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: native-sidecar
spec:
  initContainers:
    - name: spiffe-helper-init
      image: spiffe-helper
      args: ["-daemon-mode", "false"]
      volumeMounts:
        - name: svids
          mountPath: /var/run/svids
    - name: spiffe-helper-sidecar
      image: spiffe-helper
      args: ["-daemon-mode", "true"]
      restartPolicy: Always
      volumeMounts:
        - name: svids
          mountPath: /var/run/svids
  containers:
    - name: app
      image: tomcat
      volumeMounts:
        - name: svids
          mountPath: /var/run/svids
  volumes:
    - name: svids
      emptyDir:
        medium: Memory
---
apiVersion: v1
kind: Pod
metadata:
  name: non-native-sidecar
spec:
  initContainers:
    - name: spiffe-helper-init
      image: spiffe-helper
      args: ["-daemon-mode", "false"]
      volumeMounts:
        - name: svids
          mountPath: /var/run/svids
  containers:
    - name: app
      image: tomcat
      volumeMounts:
        - name: svids
          mountPath: /var/run/svids
    - name: spiffe-helper-sidecar
      image: spiffe-helper
      args: ["-daemon-mode", "true"]
      volumeMounts:
        - name: svids
          mountPath: /var/run/svids
  volumes:
    - name: svids
      emptyDir:
        medium: Memory
```

Unless you're thinking there's a way to have only one spiffe-helper container instead of two?
Yeah, I think it should only need one instance. |
I don't think a readiness probe will work. I just tested with this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: "native-sidecar-with-failing-readiness-probe-test"
spec:
  initContainers:
    - name: init-1
      image: busybox
      args: ['/bin/sh', '-c', 'echo "init-1 started"; sleep 1h;']
      restartPolicy: Always
      readinessProbe:
        exec:
          command: ['sh', '-c', 'exit 1']
    - name: init-2
      image: busybox
      args: ['/bin/sh', '-c', 'echo "init-2 started"; sleep 1h;']
      restartPolicy: Always
      readinessProbe:
        exec:
          command: ['sh', '-c', 'exit 1']
  containers:
    - name: main
      image: busybox
      args: ['/bin/sh', '-c', 'echo "main started"; sleep 1h;']
```

The failing readiness probe didn't prevent the main container from being started (or even the second init container from being started). The doc says the probe will be used to determine the pod's health (which makes sense, since as a sidecar it will run for as long as the other containers do), but it never says it will block any other container, and it doesn't.

If we were thinking of using a file to signal to the main containers when the SVID is ready, wouldn't we still need an init container to wait for that file (although it could be just a shell script and not another instance of the spiffe-helper)? If I'm totally misunderstanding your idea, please correct me.
s/readinessProbe/startupProbe/ and I think it works. |
Ah, duh. You're right. Startup probe blocks other init containers as well as regular containers. |
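Putting that together, a single spiffe-helper instance as a native sidecar might look like the sketch below. It is only a sketch: spiffe-helper has no built-in probe today (per the discussion above), so the startup probe simply waits for an assumed SVID filename to appear on the shared volume, and it presumes the image has a shell for the exec probe:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: native-sidecar-single-helper    # hypothetical example
spec:
  initContainers:
    - name: spiffe-helper-sidecar
      image: spiffe-helper
      args: ["-daemon-mode", "true"]
      restartPolicy: Always             # native sidecar (k8s 1.28+)
      # A startup probe on a native sidecar blocks later init containers
      # and the app containers until it succeeds.
      startupProbe:
        exec:
          # svid.pem is an assumed output filename; match your helper config.
          command: ['sh', '-c', 'test -s /var/run/svids/svid.pem']
        periodSeconds: 1
        failureThreshold: 60
      volumeMounts:
        - name: svids
          mountPath: /var/run/svids
  containers:
    - name: app
      image: tomcat
      volumeMounts:
        - name: svids
          mountPath: /var/run/svids
  volumes:
    - name: svids
      emptyDir:
        medium: Memory
```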
It would be great if spiffe-helper could be used as a sidecar under Kubernetes.
This would require two different modes of operation to function well.
One would run as a k8s initContainer and ensure initial cert/key/CA creation before the workload starts.
A container image would also be needed. Requested here: #107