vmi start fails to start when both pods and vms label is opted in #140

Open
RamLavi opened this issue May 10, 2020 · 2 comments
Comments


RamLavi commented May 10, 2020

What happened:
When trying to start the VM, the following error is returned:

```
failed to create virtual machine pod: admission webhook "mutatepods.kubemacpool.io" denied the request: failed to allocate requested mac address
```

What you expected to happen:
The mutatepods webhook should ignore this pod since it is controlled by KubeVirt (i.e. the MAC address was already allocated to the VM instance), and starting the VM should succeed.

How to reproduce it (as minimally and precisely as possible):

  1. Apply a namespace with the opt-in labels for both vms and pods:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    mutatepods.kubemacpool.io: allocateForAll
    mutatevirtualmachines.kubemacpool.io: allocateForAll
  name: kmp-opt-in-ns
```

  2. Apply this NetworkAttachmentDefinition:

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: "kmp-opt-br"
  namespace: "kmp-opt-in-ns"
  annotations:
    k8s.v1.cni.cncf.io/resourceName: "bridge.network.kubevirt.io/kmp-opt-br"
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "kmp-opt-br",
    "plugins": [{
      "type": "cnv-bridge",
      "bridge": "kmp-opt-br"
    }]
  }'
```

  3. Apply this VirtualMachine:

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  labels:
    special: kmp-opt-vm
  name: kmp-opt-vm
  namespace: kmp-opt-in-ns
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-fedora
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
            - name: default
              masquerade: {}
            - name: kmp-opt-br
              bridge: {}
          rng: {}
        machine:
          type: ""
        resources:
          requests:
            memory: 1024M
      terminationGracePeriodSeconds: 0
      networks:
        - name: default
          pod: {}
        - multus:
            networkName: kmp-opt-br
          name: kmp-opt-br
      volumes:
      - containerDisk:
          image: quay.io/redhat/cnv-tests-fedora-staging:31
        name: containerdisk
      - cloudInitNoCloud:
          userData: |-
            #cloud-config
            password: fedora
            chpasswd: { expire: False }
        name: cloudinitdisk
```

  4. Start the VM:

```shell
virtctl start kmp-opt-vm
```

Anything else we need to know?:
Note that this is not caused by the opt-in functionality itself. On the contrary, the opt-in functionality gives us a workaround: by opting in only the vms label, we can avoid the issue.
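For reference, a sketch of the workaround namespace, opting in only the vms webhook (label keys and values taken from the reproduction steps above):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    mutatevirtualmachines.kubemacpool.io: allocateForAll
  name: kmp-opt-in-ns
```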

Environment:
KUBEVIRT_PROVIDER=k8s-1.17


RamLavi commented May 10, 2020

When virt-controller creates the virt-launcher pod, the pod is caught by the kubemacpool webhook for pods. The webhook then checks whether the pod is related to a KubeVirt VMI (isRelatedToKubevirt()).

I think that, because for some reason the webhook receives the pod request with no namespace, the VMI lookup fails and the pod is treated as a regular pod:

```
2020-05-10T15:50:26.199Z	DEBUG	Webhook mutatepods	got a create pod event	{"podName": "", "podNamespace": ""}
2020-05-10T15:50:26.199Z	DEBUG	PoolManager	test	{"requestURI": "apis/kubevirt.io/v1alpha3/namespaces//virtualmachines/kmp-opt-vm"}
2020-05-10T15:50:26.200Z	DEBUG	PoolManager	get kubevirt virtual machine object response	{"err": "the server could not find the requested resource", "response": ""}
2020-05-10T15:50:26.200Z	DEBUG	PoolManager	this pod is an ephemeral vmi object allocating mac as a regular pod
```
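The malformed requestURI in the log can be illustrated with a minimal sketch (this is not kubemacpool's actual code; `vmLookupURI` is a hypothetical helper): when the pod's namespace arrives empty, the VM lookup path gets an empty `namespaces/` segment, so the API server answers "the server could not find the requested resource".

```go
package main

import "fmt"

// vmLookupURI is a hypothetical helper mimicking how a lookup URI for the
// owning VirtualMachine could be built from the admission request fields.
// An empty namespace yields a double slash, matching the log line above.
func vmLookupURI(namespace, vmName string) string {
	return fmt.Sprintf("apis/kubevirt.io/v1alpha3/namespaces/%s/virtualmachines/%s", namespace, vmName)
}

func main() {
	// Namespace missing from the webhook request:
	fmt.Println(vmLookupURI("", "kmp-opt-vm"))
	// apis/kubevirt.io/v1alpha3/namespaces//virtualmachines/kmp-opt-vm

	// With the namespace populated, the lookup would be well-formed:
	fmt.Println(vmLookupURI("kmp-opt-in-ns", "kmp-opt-vm"))
	// apis/kubevirt.io/v1alpha3/namespaces/kmp-opt-in-ns/virtualmachines/kmp-opt-vm
}
```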


RamLavi commented May 12, 2020

Seems like the issue was introduced in the commit "bump kubevirt.io to v0.25.0".
