This repository has been archived by the owner on Oct 3, 2020. It is now read-only.

Team label issue #174

Open
ghost opened this issue Jun 17, 2020 · 6 comments

Comments

@ghost

ghost commented Jun 17, 2020

I've tagged the pods and the namespace with a leaseplan/owner tag. I also set the env variable OBJECT_LABEL_TEAM to that value, but it seems that I only see an unknown team, as in the screenshot:

[screenshot: report shows the team as "unknown"]

Am I missing something?
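
For reference, a quick way to check which pods actually carry that label key (a diagnostic sketch, not part of my original setup):

kubectl get pods -A -L leaseplan/owner

-L prints the label value as an extra column, so pods where leaseplan/owner is not set as a label show an empty cell.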

@hjacobs
Owner

hjacobs commented Jul 2, 2020

Can you list the pods on the command line to show the actual objects?

@andreamaruccia

kubectl get pod -A

NAMESPACE     NAME                                                         READY   STATUS    RESTARTS   AGE
kube-system   aws-alb-ingress-controller-5b498bbbd8-7dmqk                  1/1     Running   0          16h
kube-system   aws-node-bvqmz                                               1/1     Running   0          16h
kube-system   aws-node-cvdkj                                               1/1     Running   0          16h
kube-system   aws-node-ddfxj                                               1/1     Running   0          16h
kube-system   cluster-autoscaler-aws-cluster-autoscaler-764649ffd8-ldvq8   1/1     Running   0          16h
kube-system   coredns-85fc454fc8-28q2x                                     1/1     Running   0          16h
kube-system   coredns-85fc454fc8-ccswp                                     1/1     Running   0          16h
kube-system   datadog-agent-ng48x                                          1/1     Running   0          17h
kube-system   datadog-agent-tj8ts                                          1/1     Running   0          17h
kube-system   datadog-agent-vlm6f                                          1/1     Running   0          17h
kube-system   datadog-cluster-agent-689dd7c9cb-9ns5x                       1/1     Running   0          17h
kube-system   external-dns-7dd75d499b-7qc7b                                1/1     Running   0          16h
kube-system   kube-proxy-b2bbp                                             1/1     Running   0          17h
kube-system   kube-proxy-hv6tp                                             1/1     Running   0          17h
kube-system   kube-proxy-qnhlm                                             1/1     Running   0          17h
kube-system   kube-resource-report-9847b67cd-wq6fh                         2/2     Running   0          16h
kube-system   kube-state-metrics-59c99f7985-h4bng                          1/1     Running   0          16h
kube-system   kube2iam-4cx9c                                               1/1     Running   0          17h
kube-system   kube2iam-8p5h7                                               1/1     Running   0          17h
kube-system   kube2iam-pp2ph                                               1/1     Running   0          17h
kube-system   metrics-server-659cd79794-nr4ng                              1/1     Running   0          16h
kube-system   nginx-ingress-controller-64d855bdbc-7kw8j                    1/1     Running   0          16h
kube-system   nginx-ingress-controller-64d855bdbc-7wxkm                    1/1     Running   0          16h
kube-system   nginx-ingress-default-backend-77d44b877d-7mpsc               1/1     Running   0          16h
kube-system   opa-6b596479bd-2p7mz                                         2/2     Running   0          16h
kube-system   opa-6b596479bd-vk2gx                                         2/2     Running   0          16h
kube-system   opa-6b596479bd-zfxg4                                         2/2     Running   0          16h
velero        velero-58bc4b8f48-hppwn                                      1/1     Running   0          16h

kubectl describe pod kube-resource-report-9847b67cd-wq6fh

Name:         kube-resource-report-9847b67cd-wq6fh
Namespace:    kube-system
Priority:     0
Node:         ip-<redacted>.eu-west-1.compute.internal/<redacted>
Start Time:   Thu, 02 Jul 2020 16:52:28 +0200
Labels:       app.kubernetes.io/instance=kube-resource-report
              app.kubernetes.io/name=kube-resource-report
              pod-template-hash=9847b67cd
Annotations:  kubernetes.io/psp: eks.privileged
Status:       Running
IP:           <redacted>
IPs:
  IP:           <redacted>
Controlled By:  ReplicaSet/kube-resource-report-9847b67cd
Containers:
  kube-resource-report:
    Container ID:  docker://6f26349215b1e121e16ac29d9da851313170cf02abc2482f0d69390bad9e2a05
    Image:         hjacobs/kube-resource-report:20.4.5
    Image ID:      docker-pullable://hjacobs/kube-resource-report@sha256:ba9174688b839f79e7c77b161d75b10043435179c42cc483b4d4f7a900054872
    Port:          <none>
    Host Port:     <none>
    Args:
      --update-interval-minutes=1
      /output
    State:          Running
      Started:      Thu, 02 Jul 2020 16:52:37 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  100Mi
    Requests:
      cpu:     5m
      memory:  50Mi
    Environment:
      DEFAULT_CLUSTER_NAME:  lpd-sandbox2-v2
      OBJECT_LABEL_TEAM:     leaseplan/owner
    Mounts:
      /output from report-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-resource-report-token-h6xgs (ro)
  nginx:
    Container ID:   docker://554c60de33c185941c687dac871f9179e9e92eb8730cbabe3777b7d1359fd418
    Image:          nginx:alpine
    Image ID:       docker-pullable://nginx@sha256:b89a6ccbda39576ad23fd079978c967cecc6b170db6e7ff8a769bf2259a71912
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 02 Jul 2020 16:52:41 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  50Mi
    Requests:
      cpu:        5m
      memory:     20Mi
    Readiness:    http-get http://:8080/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/nginx/conf.d/default.conf from nginx-config (ro,path="default.conf")
      /etc/nginx/nginx.conf from nginx-config (ro,path="nginx.conf")
      /tmp from nginx-tmp (rw)
      /usr/share/nginx/html from report-data (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-resource-report-token-h6xgs (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  nginx-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-resource-report-nginx
    Optional:  false
  nginx-tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  report-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  500Mi
  kube-resource-report-token-h6xgs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-resource-report-token-h6xgs
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

@hjacobs
Owner

hjacobs commented Jul 3, 2020

Can you describe a pod where you set the team label (leaseplan/owner)? The pod shown above does not have it.
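
One quick way to find such pods (a sketch, assuming the label is applied somewhere):

kubectl get pods -A -l leaseplan/owner

The -l selector only matches labels, not annotations.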

@andreamaruccia

andreamaruccia commented Jul 3, 2020

Got it, this should be it now:

kubectl describe pod -n andrea-tests andrea-test-andrea-test-55b97969d7-2lbc7

Name:         andrea-test-andrea-test-55b97969d7-2lbc7
Namespace:    andrea-tests
Priority:     0
Node:         ip-<redacted>.eu-west-1.compute.internal/<redacted>
Start Time:   Fri, 03 Jul 2020 10:35:36 +0200
Labels:       app=andrea-test-andrea-test
              pod-template-hash=55b97969d7
Annotations:  ad.datadoghq.com/andrea-test-andrea-test.logs: [{"source":"andrea-test-andrea-test","service":"andrea-test-andrea-test"}]
              ad.datadoghq.com/tags:
                {"leaseplan-version": "027050e9", "leaseplan-environment": "lpd-sandbox", "leaseplan-cost-center": "<redacted>", "leaseplan-owner": "andrea",...
              app.gitlab.com/app: andrea-maruccia-ansible-playground
              app.gitlab.com/env: sandbox
              configHash: f47bb090cd5fb84a534a660f57c4d48358e3a151c2e1dc3c6380ee3f7fa0f4f5
              iam.amazonaws.com/role: some-role
              kubernetes.io/psp: eks.privileged
              leaseplan/alert-channel: not-set
              leaseplan/cost-center: <redacted>
              leaseplan/data-classification: no-sensitive-data
              leaseplan/data-expiry-time: no-exp
              leaseplan/entity: lpdi
              leaseplan/environment: lpd-sandbox
              leaseplan/owner: andrea
              leaseplan/product: andrea-test
              leaseplan/project-name: ansible-playground
              leaseplan/project-namespace: andrea.maruccia
              leaseplan/service: andrea-test
              leaseplan/version: 027050e9
Status:       Running
IP:           <redacted>
IPs:
  IP:           <redacted>
Controlled By:  ReplicaSet/andrea-test-andrea-test-55b97969d7
Containers:
  andrea-test-andrea-test:
    Container ID:   docker://8aa6676852f9b173945b1a12e60861cb780dfacaa2656e9e34c2a3373bf27b6b
    Image:          <redacted>.dkr.ecr.eu-west-1.amazonaws.com/andrea-test/andrea-test:027050e9-173193
    Image ID:       docker-pullable://<redacted>.dkr.ecr.eu-west-1.amazonaws.com/andrea-test/andrea-test@sha256:f856ecd890d231d1dcf8fed2d9468de52b777860c7c00a6f958ef4046ae1936d
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 03 Jul 2020 10:35:37 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:                500m
      ephemeral-storage:  250M
      memory:             256Mi
    Requests:
      cpu:                250m
      ephemeral-storage:  150M
      memory:             256Mi
    Liveness:             http-get http://:80/ delay=5s timeout=4s period=5s #success=1 #failure=3
    Environment Variables from:
      config-andrea-test-andrea-test  ConfigMap  Optional: false
    Environment:                      <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7pl2r (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-7pl2r:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-7pl2r
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From                                                  Message
  ----    ------     ----   ----                                                  -------
  Normal  Scheduled  2m42s  default-scheduler                                     Successfully assigned andrea-tests/andrea-test-andrea-test-55b97969d7-2lbc7 to ip-<redacted>.eu-west-1.compute.internal
  Normal  Pulled     2m41s  kubelet, ip-<redacted>.eu-west-1.compute.internal  Container image "<redacted>.dkr.ecr.eu-west-1.amazonaws.com/andrea-test/andrea-test:027050e9-173193" already present on machine
  Normal  Created    2m41s  kubelet, ip-<redacted>.eu-west-1.compute.internal  Created container andrea-test-andrea-test
  Normal  Started    2m41s  kubelet, ip-<redacted>.eu-west-1.compute.internal  Started container andrea-test-andrea-test

@hjacobs
Owner

hjacobs commented Jul 3, 2020

@andreamaruccia you are using annotations instead of having leaseplan/owner as a label.
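
The difference on the command line would be roughly this (a sketch using the pod from the previous comment; kube-resource-report only reads the label form):

# what the pod currently has (an annotation, not read by the OBJECT_LABEL_TEAM lookup):
kubectl annotate pod andrea-test-andrea-test-55b97969d7-2lbc7 -n andrea-tests leaseplan/owner=andrea

# what the report expects (a label):
kubectl label pod andrea-test-andrea-test-55b97969d7-2lbc7 -n andrea-tests leaseplan/owner=andrea

In practice the label belongs on the Deployment's pod template (and on the namespace), otherwise it is lost when the ReplicaSet recreates the pod.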

@andreamaruccia

andreamaruccia commented Jul 4, 2020

Ouch, yeah, this is where we put our metadata. It seemed like the right place since we did not use it in any selectors. I guess we'll have to move it to labels, since that is how kube-resource-report selects objects, right?

Edit: tested with labels and it worked instantly.
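
For anyone hitting the same issue, a minimal sketch of the fix (names taken from the output above):

kubectl label namespace andrea-tests leaseplan/owner=andrea

plus moving leaseplan/owner from metadata.annotations to metadata.labels in the Deployment's pod template, so that new pods carry it as a label.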
