
Enable rmt chart run on air-gapped environments #12

Merged
merged 1 commit into from
Jul 23, 2024

Conversation

rafaprata
Contributor

This PR attempts to enhance the rmt-chart to run on air-gapped environments.

SUSE/rmt#1171
#11

@lagartoflojo

Thank you @rafaprata ! We'll review the PR as soon as we can.

@josegomezr

Hi @rafaprata,

Thanks for your contribution. Although this will fix the issue, it opens a vector for unsupported (and highly surprising) behavior.

The first problem: ConfigMap changes won't trigger a restart or a re-deployment of RMT.

The projected volume where the ConfigMap is mounted is updated live, and Kubernetes expects workloads to react to that update on their own.
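As an aside, the usual Helm convention for the opposite behavior (restart on config change) is to hash the rendered ConfigMap into a pod-template annotation. A sketch of that pattern, not something this chart currently does:

```yaml
# deployment.yaml (hypothetical excerpt). The sha256 of the rendered
# ConfigMap template is stored as a pod annotation; any edit to the
# ConfigMap changes the annotation, which changes the pod template
# and makes Kubernetes roll the Deployment.
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```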

Take the Prometheus Helm charts as an example: the Prometheus deployment includes an optional container whose only job is to listen for filesystem events triggered by [among other things] ConfigMap updates and call the reload endpoint in Prometheus itself, providing the "live config reload" experience.

Should we enable this approach, the workload itself [RMT] would need to listen, one way or another, for changes to the projected ConfigMap in order to react to them. Without that, the only option is to make the change AND THEN restart the deployment [by whatever means necessary, as long as the RMT server pod gets restarted].


The second problem is that the ConfigMap holds the bash entrypoint of the container image. In practice, this allows changing the contents of the container image pretty much silently.

In the extremely optimistic scenario, this allows consumers of RMT to modify how RMT boots in ever more creative ways that don't necessarily align with what's supported, even if they fix the problem at hand.

By the same token, it allows running effectively anything your heart (and the binaries contained in the rmt-* container images) allows. There are no readiness/startup/liveness probes that would restart or back off the startup of the container.
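Wiring up a probe is cheap, for what it's worth. A minimal sketch, where the health endpoint path is an assumption I have not verified against RMT:

```yaml
# deployment.yaml (hypothetical excerpt): a liveness probe gives
# Kubernetes a reason to restart or back off a container whose
# startup was altered and never brings up the RMT server.
livenessProbe:
  httpGet:
    path: /api/health   # hypothetical endpoint, not verified against RMT
    port: 4224          # RMT's default listen port
  initialDelaySeconds: 30
  periodSeconds: 10
```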

This is an easy vector for security problems. It takes just one silent ConfigMap change and a pod restart to make a seemingly passive workload do very unexpected things. Worst of all, via the ConfigMap the change can lie dormant for ages until the restart happens and we get a cyber-security incident.


However, your solution does go in a good direction: the behavior of the sync must be influenceable via values.yaml, so that Kubernetes operators can adjust the workload's behavior without compiling their own RMT image.

Instead of mounting an entrypoint [and thus changing the contents of the container], we should allow RMT to have more cloud-friendly configuration inputs.

Making it react to environment variables is a first step in my eyes, and if I'm reading the docs of the config gem RMT currently uses correctly, that should be easily achievable.

So once RMT can be influenced with environment variables, we can introduce the appropriate values.yaml entries (maybe even add them to questions.yml too for interactive installs on Rancher, though I don't fully know how those work).
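The values.yaml side of that could look something like the sketch below. All key and variable names here are hypothetical, assuming RMT learns to read such settings from the environment:

```yaml
# values.yaml (hypothetical keys): operators influence sync behavior
# here instead of overriding the container entrypoint.
rmt:
  mirroring:
    enabled: false   # e.g. switched off for air-gapped installs

# deployment.yaml (hypothetical excerpt): the chart maps those values
# to environment variables the RMT process would read at boot.
env:
  - name: RMT_MIRRORING_ENABLED
    value: {{ .Values.rmt.mirroring.enabled | quote }}
```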


@josegomezr josegomezr left a comment


Forgot to leave the request changes mark, apologies.

Contributor

@ngetahun ngetahun left a comment


LGTM! Will do follow-up PRs to add suggestions from Jose.

@ngetahun ngetahun dismissed josegomezr’s stale review July 23, 2024 12:00

merging this now

@ngetahun ngetahun merged commit 96750fc into SUSE:main Jul 23, 2024
@ngetahun ngetahun mentioned this pull request Sep 6, 2024