I have added the following to my values.yaml:
```yaml
extraContainers:
  - name: pihole-keepalived
    image: osixia/keepalived
    tag: 2.0.20
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 100m
        memory: 5Mi
      limits:
        cpu: 100m
        memory: 5Mi
    volumeMounts:
      - name: keepalived-config
        mountPath: /container/service/keepalived/assets/keepalived.conf
        subPath: keepalived.conf
    securityContext:
      privileged: true
      capabilities:
        - NET_ADMIN
        - NET_BROADCAST
        - NET_RAW

extraVolumes:
  keepalived-config:
    configMap:
      name: pi-hole-keepalived-config

extraObjects:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: pi-hole-keepalived-config
    data:
      keepalived.conf: |
        ! Configuration File for keepalived
        global_defs {
          router_id LVS_DEVEL
        }
        vrrp_instance VI_1 {
          interface eth0
          virtual_router_id 50
          nopreempt
          priority 100
          advert_int 1
          virtual_ipaddress {
            192.168.200.11
            192.168.200.12
            192.168.200.13
          }
        }
        virtual_server 10.10.10.2 1358 {
          delay_loop 6
          lb_algo rr
          lb_kind NAT
          persistence_timeout 50
          protocol TCP
          sorry_server 192.168.200.200 1358
          real_server 192.168.200.2 1358 {
            weight 1
            HTTP_GET {
              url {
                path /testurl3/test.jsp
                digest 640205b7b0fc66c1ea91c463fac6334d
              }
              connect_timeout 3
              retry 3
              delay_before_retry 3
            }
          }
        }
```
However, no extra container is started.

I'm very new to Kubernetes and don't understand how Helm charts work yet, so I can't debug this myself.

I can't find any logs for the extra container, and `kubectl describe` doesn't show it either:
```
Name:             pihole-858767b84f-nrv4f
Namespace:        dns
Priority:         0
Service Account:  default
Node:             controller03/192.168.11.3
Start Time:       Sun, 08 Dec 2024 19:25:37 +0100
Labels:           app=pihole
                  app.kubernetes.io/name=pihole
                  pod-template-hash=858767b84f
                  release=pihole
Annotations:      checksum.config.adlists: b92bc412d85451fd25794c35d4001bc796e1538d49b55bdf295a8225b3f4666
                  checksum.config.blacklist: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546
                  checksum.config.dnsmasqConfig: 53978aac3550eac7e5077fcf7f50eedde465579b2e353e5bf70c1734609c2b7
                  checksum.config.regex: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546
                  checksum.config.staticDhcpConfig: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546
                  checksum.config.whitelist: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546
                  kubectl.kubernetes.io/restartedAt: 2024-12-08T19:25:37+01:00
Status:           Running
IP:               10.244.2.6
IPs:
  IP:  10.244.2.6
Controlled By:  ReplicaSet/pihole-858767b84f
Containers:
  pihole:
    Container ID:   containerd://fec017f45ee4bbfc80b7e6ef70fafdbec22e9dd42df9bb0d66b02165e7ba8c90
    Image:          pihole/pihole:2024.07.0
    Image ID:       docker.io/pihole/pihole@sha256:0def896a596e8d45780b6359dbf82fc8c75ef05b97e095452e67a0a4ccc95377
    Ports:          80/TCP, 53/TCP, 53/UDP, 443/TCP, 67/UDP
    Host Ports:     0/TCP, 0/TCP, 0/UDP, 0/TCP, 0/UDP
    State:          Running
      Started:      Sun, 08 Dec 2024 19:25:38 +0100
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:http/admin/index.php delay=60s timeout=5s period=10s #success=1 #failure=10
    Readiness:      http-get http://:http/admin/index.php delay=60s timeout=5s period=10s #success=1 #failure=10
    Environment:
      WEB_PORT:      80
      VIRTUAL_HOST:  pi.hole
      WEBPASSWORD:   <set to the key 'password' in secret 'pihole-password'>  Optional: false
      PIHOLE_DNS_:   8.8.8.8;8.8.4.4
    Mounts:
      /etc/addn-hosts from custom-dnsmasq (rw,path="addn-hosts")
      /etc/dnsmasq.d/02-custom.conf from custom-dnsmasq (rw,path="02-custom.conf")
      /etc/pihole from config (rw)
      /etc/pihole/adlists.list from adlists (rw,path="adlists.list")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zmf4l (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  config:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  custom-dnsmasq:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      pihole-custom-dnsmasq
    Optional:  false
  adlists:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      pihole-adlists
    Optional:  false
  kube-api-access-zmf4l:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  52m   default-scheduler  Successfully assigned dns/pihole-858767b84f-nrv4f to controller03
  Normal  Pulled     52m   kubelet            Container image "pihole/pihole:2024.07.0" already present on machine
  Normal  Created    52m   kubelet            Created container pihole
  Normal  Started    52m   kubelet            Started container pihole
```
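Since the describe output above lists only the `pihole` container, one way to narrow this down is to check what the chart actually renders before it ever reaches the cluster. The sketch below is an assumption-laden example, not a confirmed fix: the release name `pihole` and namespace `dns` are taken from the describe output, while the chart reference `mojo2600/pihole` is a guess — substitute whatever chart you actually installed.

```shell
# 1) Render the chart locally with the same values file and look for the
#    sidecar. If nothing is printed, the chart never rendered the extra
#    container, i.e. the values are not being picked up (wrong key name,
#    wrong indentation, or a chart version without extraContainers support).
#    "pihole" (release) and "mojo2600/pihole" (chart ref) are assumptions.
helm template pihole mojo2600/pihole -f values.yaml \
  | grep -B2 -A4 'pihole-keepalived'

# 2) List the container names the deployed Deployment actually requests;
#    if only "pihole" comes back, the sidecar was never part of the
#    applied manifest.
kubectl -n dns get deployment pihole \
  -o jsonpath='{.spec.template.spec.containers[*].name}'; echo
```

If the sidecar does appear in the `helm template` output but not in the cluster, the deployed release is simply out of date and a `helm upgrade` with the same values file should reconcile it.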
Thanks in advance!