Static manifests are not automatically reapplied #5481

Closed
thannaske opened this issue Feb 16, 2024 · 4 comments

@thannaske

thannaske commented Feb 16, 2024

Environmental Info:
RKE2 Version:

rke2 version v1.28.6+rke2r1 (572f367fd08e0e650a278477fdeea551dba48af7)
go version go1.20.13 X:boringcrypto

Node(s) CPU architecture, OS, and Version:

Linux k8s-master-1 5.15.0-89-generic #99-Ubuntu SMP Mon Oct 30 20:42:41 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.4 LTS"

Cluster Configuration: 3 Servers, 4 Agents

Describe the bug:
According to the documentation, one can place YAML manifests under /var/lib/rancher/rke2/server/manifests to have them automatically applied to the cluster. This works perfectly fine a) when the file is initially written and b) when it is edited and saved. However, when you manually delete the resource that the manifest creates from the cluster, the manifest is not automatically re-applied. It is re-applied when the rke2-server is restarted, when the file is edited and saved again, and when the file is renamed, but not when its resulting resource is simply deleted from the cluster.

Steps To Reproduce:

  • Install RKE2 on the latest v1.28 Kubernetes version
  • Place a YAML manifest into /var/lib/rancher/rke2/server/manifests, e.g. one that creates a throwaway ClusterRole object (see the sketch after this list)
  • Delete the created object using kubectl delete clusterrole foo
  • Wait for several minutes
  • Verify that the object hasn't been automatically recreated
  • Restart the rke2-server systemd unit on one of the rke2-servers
  • Verify that the object has been automatically recreated
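
A minimal reproduction sketch along these lines, assuming a placeholder manifest file foo-clusterrole.yaml and ClusterRole name foo (run on a server node):

```bash
# Placeholder names: foo-clusterrole.yaml / ClusterRole "foo".
# 1. Drop a manifest into the auto-deploy directory on a server node.
cat <<'EOF' > /var/lib/rancher/rke2/server/manifests/foo-clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: foo
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
EOF

# 2. Confirm it was applied, then delete the resulting object.
kubectl get clusterrole foo
kubectl delete clusterrole foo

# 3. The object is not recreated until rke2-server is restarted.
sleep 300 && kubectl get clusterrole foo   # NotFound
systemctl restart rke2-server
kubectl get clusterrole foo                # present again
```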

Expected behavior:
As with a default kubeadm setup, I'd expect the manifest to be automatically reapplied when the resulting resource is deleted from the cluster.

Actual behavior:
The manifest is not reapplied until the rke2-server is restarted.

@dereknola
Member

This is expected behavior. We monitor the files for changes and apply them. If you modify the Addon yourself through kubectl, RKE2 will not reapply it because the manifest file itself has not changed.
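
As a practical corollary, any change to the manifest file on disk triggers a redeploy; for example (using the placeholder path from the reproduction steps above, and assuming that appending a trailing YAML comment is an acceptable way to "edit" the file):

```bash
# Appending a harmless comment changes the file on disk, which causes the
# deploy controller to reapply the manifest (placeholder path).
echo "# force reapply $(date -Iseconds)" >> /var/lib/rancher/rke2/server/manifests/foo-clusterrole.yaml
```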

@brandond
Member

brandond commented Feb 16, 2024

We cover this better in the k3s docs; I believe @dereknola is working on adapting some of this content for the rke2 docs, but in the meantime you can reference: https://docs.k3s.io/installation/packaged-components#auto-deploying-manifests-addons

On server nodes, any file found in /var/lib/rancher/k3s/server/manifests will automatically be deployed to Kubernetes in a manner similar to kubectl apply, both on startup and when the file is changed on disk. Deleting files out of this directory will not delete the corresponding resources from the cluster.

@thannaske
Author

I see, thank you for clarifying.

Is there any feature or possibility in RKE2 to enforce manifest application to the cluster even when the resulting resource is deleted via kubectl? To provide some background: I want to apply some Admission Webhooks and make sure that not even a user with cluster-admin permissions is able to remove them.

@brandond
Member

brandond commented Feb 16, 2024

No, we are not planning on doing that.

You should use RBAC to ensure that people don't have permission to delete things you don't want them deleting. Having something else that just puts them back is not a sufficient control.
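
For illustration, a minimal sketch of that kind of RBAC restriction (the role name and rules below are hypothetical, not from this thread): since RBAC is purely additive, the point is to bind operators to a narrower role than cluster-admin, one that simply does not grant delete on webhook configurations.

```bash
# Hypothetical example: a ClusterRole granting only read access to admission
# webhook configurations. Bind users to roles like this instead of cluster-admin.
cat <<'EOF' | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: webhook-read-only   # illustrative name
rules:
- apiGroups: ["admissionregistration.k8s.io"]
  resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]
  verbs: ["get", "list", "watch"]   # no update/delete
EOF
```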
