
Upgrade Linux Kernel for main from 6.6.59 to 6.6.60 #2438

Merged
merged 1 commit into main from linux-6.6.60-main
Nov 11, 2024

Conversation

flatcar-infra

Subject says it all.


Test report for 4148.0.0+nightly-20241108-2100 / amd64 arm64

Platforms tested : qemu_uefi-amd64 qemu_update-amd64 qemu_uefi-arm64 qemu_update-arm64

ok bpf.execsnoop 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok bpf.local-gadget 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.basic 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cgroupv1 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.basic 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.multipart-mime 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.script 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid0.data 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid0.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid1.data 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid1.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.discovery 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.etcdctlv3 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.v2-backup-restore 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.filesystem 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.flannel.udp 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.flannel.vxlan 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.instantiated.enable-unit 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.kargs 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.luks 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.indirect 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.indirect.new 🟢 Succeeded: qemu_uefi-amd64 (2); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1)

                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: 'oem.go:199: Couldn't reboot machine: machine "2ceca2d9-fb98-4acc-bf2c-3a2bd299124d" failed basic checks: some systemd units failed:"
    L2: "● ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache"
    L3: "status: "
    L4: "journal:-- No entries --"
    L5: "harness.go:602: Found systemd unit failed to start (ldconfig.service - Rebuild Dynamic Linker Cache) on machine 2ceca2d9-fb98-4acc-bf2c-3a2bd299124d console'"
    L6: " "
    L7: "  "

ok cl.ignition.oem.regular 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.regular.new 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.reuse 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.wipe 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.partition_on_boot_disk 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.symlink 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.translation 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.btrfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.ext4root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.groups 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.once 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.sethostname 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.users 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.xfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.btrfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.ext4root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.users 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.xfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.ext4checkexisting 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.swap 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.vfat 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.install.cloudinit 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.internet 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.locksmith.cluster 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.misc.falco 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.network.initramfs.second-boot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.network.listeners 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.network.wireguard 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.omaha.ping 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.osreset.ignition-rerun 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.overlay.cleanup 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.swap_activation 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.sysext.boot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.sysext.fallbackdownload # SKIP 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tang.nonroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tang.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.toolbox.dnf-install 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tpm.eventlog 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tpm.nonroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tpm.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tpm.root-cryptenroll 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tpm.root-cryptenroll-pcr-noupdate 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tpm.root-cryptenroll-pcr-withupdate 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.update.badverity 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.update.grubnop 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.update.payload 🟢 Succeeded: qemu_update-amd64 (1); qemu_update-arm64 (1)

ok cl.update.reboot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.users.shells 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.verity 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.auth.verify 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.groups 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.once 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.local 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.remote 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.s3.versioned 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.security.tls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.sethostname 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.systemd.enable-service 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.locksmith.reboot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.locksmith.tls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.selinux.boolean 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.selinux.enforce 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.tls.fetch-urls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.update.badusr 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok devcontainer.docker 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok devcontainer.systemd-nspawn 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.btrfs-storage 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.containerd-restart 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.enable-service.sysext 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.lib-coreos-dockerd-compat 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.network-openbsd-nc 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.selinux 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.userns 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok extra-test.[first_dual].cl.update.docker-btrfs-compat 🟢 Succeeded: qemu_update-amd64 (1); qemu_update-arm64 (1)

ok extra-test.[first_dual].cl.update.payload 🟢 Succeeded: qemu_update-amd64 (1); qemu_update-arm64 (1)

ok kubeadm.v1.29.2.calico.base 🟢 Succeeded: qemu_uefi-amd64 (2); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1)

                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: 'cluster.go:125: I1111 13:45:39.009723    1895 version.go:256] remote version is much newer: v1.31.2; falling back to: stable-1.29"
    L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.29.10"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.29.10"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.29.10"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.29.10"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.11.1"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.10-0"
    L9: "cluster.go:125: I1111 13:45:49.765211    2132 version.go:256] remote version is much newer: v1.31.2; falling back to: stable-1.29"
    L10: "cluster.go:125: [init] Using Kubernetes version: v1.29.10"
    L11: "cluster.go:125: [preflight] Running pre-flight checks"
    L12: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
    L13: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L14: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L15: "cluster.go:125: [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'"
    L16: "cluster.go:125: W1111 13:45:55.323951    2132 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image."
    L17: "cluster.go:125: [certs] Using certificateDir folder "/etc/kubernetes/pki""
    L18: "cluster.go:125: [certs] Generating "ca" certificate and key"
    L19: "cluster.go:125: [certs] Generating "apiserver" certificate and key"
    L20: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.15]"
    L21: "cluster.go:125: [certs] Generating "apiserver-kubelet-client" certificate and key"
    L22: "cluster.go:125: [certs] Generating "front-proxy-ca" certificate and key"
    L23: "cluster.go:125: [certs] Generating "front-proxy-client" certificate and key"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L28: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L29: "cluster.go:125: [certs] Generating "sa" key and public key"
    L30: "cluster.go:125: [kubeconfig] Using kubeconfig folder "/etc/kubernetes""
    L31: "cluster.go:125: [kubeconfig] Writing "admin.conf" kubeconfig file"
    L32: "cluster.go:125: [kubeconfig] Writing "super-admin.conf" kubeconfig file"
    L33: "cluster.go:125: [kubeconfig] Writing "kubelet.conf" kubeconfig file"
    L34: "cluster.go:125: [kubeconfig] Writing "controller-manager.conf" kubeconfig file"
    L35: "cluster.go:125: [kubeconfig] Writing "scheduler.conf" kubeconfig file"
    L36: "cluster.go:125: [control-plane] Using manifest folder "/etc/kubernetes/manifests""
    L37: "cluster.go:125: [control-plane] Creating static Pod manifest for "kube-apiserver""
    L38: "cluster.go:125: [control-plane] Creating static Pod manifest for "kube-controller-manager""
    L39: "cluster.go:125: [control-plane] Creating static Pod manifest for "kube-scheduler""
    L40: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env""
    L41: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml""
    L42: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L43: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 30m0s"
    L44: "cluster.go:125: [apiclient] All control plane components are healthy after 6.002078 seconds"
    L45: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace"
    L46: "cluster.go:125: [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster"
    L47: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L48: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L49: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L50: "cluster.go:125: [bootstrap-token] Using token: 2odsxt.k9hkkbryi1pc4b27"
    L51: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L53: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L54: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L55: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L56: "cluster.go:125: [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace"
    L57: "cluster.go:125: [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key"
    L58: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L59: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L60: "cluster.go:125: "
    L61: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L62: "cluster.go:125: "
    L63: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L64: "cluster.go:125: "
    L65: "cluster.go:125:   mkdir -p $HOME/.kube"
    L66: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L67: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L68: "cluster.go:125: "
    L69: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L70: "cluster.go:125: "
    L71: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L72: "cluster.go:125: "
    L73: "cluster.go:125: You should now deploy a pod network to the cluster."
    L74: "cluster.go:125: Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:"
    L75: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L76: "cluster.go:125: "
    L77: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L78: "cluster.go:125: "
    L79: "cluster.go:125: kubeadm join 10.0.0.15:6443 --token 2odsxt.k9hkkbryi1pc4b27 \"
    L80: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:0a63c749c085e9d1615110e7a0909e5190401bd0387dfc01de9f628a1ce64f97 "
    L81: "cluster.go:125: namespace/tigera-operator created"
    L82: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created"
    L83: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created"
    L84: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created"
    L85: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created"
    L86: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created"
    L87: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created"
    L88: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created"
    L89: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created"
    L90: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created"
    L91: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created"
    L92: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created"
    L93: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created"
    L94: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created"
    L95: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created"
    L96: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created"
    L97: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created"
    L98: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created"
    L99: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created"
    L100: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/tiers.crd.projectcalico.org created"
    L101: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/adminnetworkpolicies.policy.networking.k8s.io created"
    L102: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created"
    L103: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created"
    L104: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created"
    L105: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created"
    L106: "cluster.go:125: serviceaccount/tigera-operator created"
    L107: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
    L108: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
    L109: "cluster.go:125: deployment.apps/tigera-operator created"
    L110: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
    L111: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
    L112: "cluster.go:125: installation.operator.tigera.io/default created"
    L113: "cluster.go:125: apiserver.operator.tigera.io/default created"
    L114: "cluster.go:125: [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
    L115: "cluster.go:125: error execution phase preflight: couldn't validate the identity of the API Server: configmaps "cluster-info" not found"
    L116: "cluster.go:125: To see the stack trace of this error execute with --v=5 or higher"
    L117: "kubeadm.go:193: unable to setup cluster: unable to run worker script: Process exited with status 1'"
    L118: " "
    L119: "  "

ok kubeadm.v1.29.2.calico.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.29.2.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.29.2.cilium.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.29.2.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.29.2.flannel.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (2); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1)

                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: 'cluster.go:125: I1111 13:45:37.918094    1949 version.go:256] remote version is much newer: v1.31.2; falling back to: stable-1.29"
    L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.29.10"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.29.10"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.29.10"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.29.10"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.11.1"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.10-0"
    L9: "cluster.go:125: I1111 13:45:49.762535    2212 version.go:256] remote version is much newer: v1.31.2; falling back to: stable-1.29"
    L10: "cluster.go:125: [init] Using Kubernetes version: v1.29.10"
    L11: "cluster.go:125: [preflight] Running pre-flight checks"
    L12: "cluster.go:125:  [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
    L13: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L14: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L15: "cluster.go:125: [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'"
    L16: "cluster.go:125: W1111 13:45:50.387618    2212 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image."
    L17: "cluster.go:125: [certs] Using certificateDir folder "/etc/kubernetes/pki""
    L18: "cluster.go:125: [certs] Generating "ca" certificate and key"
    L19: "cluster.go:125: [certs] Generating "apiserver" certificate and key"
    L20: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.14]"
    L21: "cluster.go:125: [certs] Generating "apiserver-kubelet-client" certificate and key"
    L22: "cluster.go:125: [certs] Generating "front-proxy-ca" certificate and key"
    L23: "cluster.go:125: [certs] Generating "front-proxy-client" certificate and key"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L28: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L29: "cluster.go:125: [certs] Generating "sa" key and public key"
    L30: "cluster.go:125: [kubeconfig] Using kubeconfig folder "/etc/kubernetes""
    L31: "cluster.go:125: [kubeconfig] Writing "admin.conf" kubeconfig file"
    L32: "cluster.go:125: [kubeconfig] Writing "super-admin.conf" kubeconfig file"
    L33: "cluster.go:125: [kubeconfig] Writing "kubelet.conf" kubeconfig file"
    L34: "cluster.go:125: [kubeconfig] Writing "controller-manager.conf" kubeconfig file"
    L35: "cluster.go:125: [kubeconfig] Writing "scheduler.conf" kubeconfig file"
    L36: "cluster.go:125: [control-plane] Using manifest folder "/etc/kubernetes/manifests""
    L37: "cluster.go:125: [control-plane] Creating static Pod manifest for "kube-apiserver""
    L38: "cluster.go:125: [control-plane] Creating static Pod manifest for "kube-controller-manager""
    L39: "cluster.go:125: [control-plane] Creating static Pod manifest for "kube-scheduler""
    L40: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env""
    L41: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml""
    L42: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L43: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 30m0s"
    L44: "cluster.go:125: [apiclient] All control plane components are healthy after 9.001056 seconds"
    L45: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace"
    L46: "cluster.go:125: [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster"
    L47: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L48: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L49: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L50: "cluster.go:125: [bootstrap-token] Using token: yu5pll.bgxtoxxgv25nn1bo"
    L51: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L53: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L54: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L55: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L56: "cluster.go:125: [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace"
    L57: "cluster.go:125: [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key"
    L58: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L59: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L60: "cluster.go:125: "
    L61: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L62: "cluster.go:125: "
    L63: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L64: "cluster.go:125: "
    L65: "cluster.go:125:   mkdir -p $HOME/.kube"
    L66: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L67: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L68: "cluster.go:125: "
    L69: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L70: "cluster.go:125: "
    L71: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L72: "cluster.go:125: "
    L73: "cluster.go:125: You should now deploy a pod network to the cluster."
    L74: "cluster.go:125: Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:"
    L75: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L76: "cluster.go:125: "
    L77: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L78: "cluster.go:125: "
    L79: "cluster.go:125: kubeadm join 10.0.0.14:6443 --token yu5pll.bgxtoxxgv25nn1bo \"
    L80: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:cea942bb489b0effd2242674d3def348b06098004fa37c4717612cd1df53c22e "
    L81: "cluster.go:125: namespace/kube-flannel created"
    L82: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/flannel created"
    L83: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/flannel created"
    L84: "cluster.go:125: serviceaccount/flannel created"
    L85: "cluster.go:125: configmap/kube-flannel-cfg created"
    L86: "cluster.go:125: daemonset.apps/kube-flannel-ds created"
    L87: "cluster.go:125: [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
    L88: "cluster.go:125: error execution phase preflight: couldn't validate the identity of the API Server: cluster CA found in cluster-info ConfigMap is invalid: none of the public keys "sha256:0a63c749c085e9d1615110e7a0909e5190401bd0387dfc01de9f628a1ce64f97" are pinned"
    L89: "cluster.go:125: To see the stack trace of this error execute with --v=5 or higher"
    L90: "kubeadm.go:193: unable to setup cluster: unable to run worker script: Process exited with status 1'"
    L91: " "
    L92: "  "
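Both kubeadm worker-join failures above are preflight CA-validation errors: the calico run found no "cluster-info" ConfigMap yet, and the flannel run found a cluster CA whose public-key hash did not match the pinned `--discovery-token-ca-cert-hash`. As a hedged sketch (not part of this test harness), that hash is the SHA-256 of the CA certificate's DER-encoded public key, and can be recomputed with the standard openssl recipe from the kubeadm docs. On a real control plane the input would be `/etc/kubernetes/pki/ca.crt`; here a throwaway CA is generated so the commands run anywhere openssl is available.

```shell
# Generate a throwaway CA (stand-in for /etc/kubernetes/pki/ca.crt).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=throwaway-ca" \
  -keyout /tmp/ca.key -out /tmp/ca.crt 2>/dev/null

# Extract the public key, convert it to DER, and hash it the way kubeadm
# pins it for --discovery-token-ca-cert-hash.
hash=$(openssl x509 -pubkey -noout -in /tmp/ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 \
  | awk '{print $NF}')
echo "sha256:${hash}"
```

If the value computed from the control plane's actual ca.crt differs from the hash the worker pins (as in the flannel failure), the worker is validating against a different cluster CA than the one the control plane published.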

ok kubeadm.v1.30.1.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.30.1.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.30.1.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.31.0.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.31.0.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.31.0.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.nfs.v3 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.nfs.v4 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.ntp 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok misc.fips 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok packages 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.custom-docker.sysext 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.custom-oem 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.disable-containerd 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.disable-docker 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.simple 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.journal.remote 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.journal.user 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.sysusers.gshadow 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

@dongsupark merged commit abd2806 into main on Nov 11, 2024
7 checks passed
@dongsupark deleted the linux-6.6.60-main branch on November 11, 2024 at 15:38