support docker in rootless images #15924
Conversation
to support rootless images
started the job as gitpod-build-ak-docker-rootless-image.1 because the annotations in the pull request description changed
it fixes the issue. I wonder, since runc, docker-compose, and docker are part of the docker-up image now, do we need to bundle them as binaries in docker-up as well? Or can we remove them there and reduce the image size? cc @csweichel @utam0k (also @aledbf, I see you changed these files last)
/werft run with-integration-tests=workspace with-large-vm=true 👍 started the job as gitpod-build-ak-docker-rootless-image.2
@utam0k can you help review this PR? For posterity, I triggered integration tests ☝️, and recommended some scenarios for @akosyakov 👇. @akosyakov can you update how to test? I think more scenarios are needed (I shared 4 below). For example, scenario 2 fails and scenario 4 is worth considering. Also, when I open a workspace using elixir as the base image, I cannot open a bash shell. Is that expected, or is that maybe a separate issue?
@csweichel this change introduces a couple of behaviors (scenarios 2 and 4, from this comment) that differ from gitpod.io. @akosyakov and I were talking, and we're a bit unclear on the spirit of the change. Is it intended that docker be available in all Gitpod workspaces, even when it wasn't installed as part of the base image (scenario 4)? What if a customer doesn't want docker working in-workspace? We'd be forcing it on them without offering an escape hatch.
I don’t think that is true even now. Supervisor always runs as root and automatically starts the docker daemon as root as well. It is enough for a user to download the docker CLI as a binary and run it. Removing sudo from the image does not prevent this, it only makes it harder.
Good point! That gets users very close to being able to use docker.
What if the underlying network hosting Gitpod is restricted, and prevents access to download the docker CLI? Without this PR, they'd lack the docker CLI. With this PR, they'd have it. There might be some cases where it's desired that a CDE not have docker. Presently, folks using custom images can opt in by adding docker to their dockerfile. Should we give them some sort of install-time opt-out ability? 🤔
Maybe then we should design it properly if it is a requirement, e.g. allow disabling it at the project level in the dashboard; in that case neither the docker-up layer would be added to the workspace, nor would supervisor install a docker socket listener. That would truly restrict docker usage. The thing is that the Dockerfile is an integral part of Gitpod configuration, and the ability to iterate on it quickly is a core experience. In that sense I think the default should be to ship the docker CLI. If some subset of customers has an additional requirement to disable it, then that should be explicit. Right now it feels like a side effect that some customers may rely on.
Don't we need docker-cli to achieve the IDE team's desired feature? I'd like to confirm what we actually need.
Using only BuildKit should be sufficient if these are the only cases. ref: https://earthly.dev/blog/compiling-containers-dockerfiles-llvm-and-buildkit/
And I wonder how many users want to use this debug feature with a rootless image. Is it worth it to affect existing docker users? I think it is sufficient to just give a warning when a user is using a rootless image. Of course, if the demand is high, we will consider implementing it, but I thought that our first priority should be to get normal-case users to use it. For example, BuildKit also has a rootless mode. I can think of a way to use this, but it would require more time. WDYT?
we decided not to go with workspacekit because of the slow export of the root fs. Instead we use
I missed this conversation. I got it 👍 |
👍 agreed
💯 good point. For gitpod.io, I think you're right that it'd need to be an opt-out accessible at the project or team level. For Gitpod Dedicated, maybe a cell could have restricted network access to an image repository, and the images in that repo would have to pass a scan prior to being usable by a Gitpod Workspace. Either way, those will require discovery and iteration in the future.
In hindsight, I agree, but it would be great to get approval on the scenarios from @loujaybee too, from a product standpoint. Lou, what do you think of these 4 scenarios?
- ["mkdir", "unarchived"] | ||
- ["tar", "-zxvf", "components-docker-up--bin-docker-up/docker.tgz", "-C", "unarchived"] | ||
- ["cp", "-vaR", "unarchived/docker/.", "."] | ||
- ["rm", "-r", "unarchived"] |
@akosyakov wdyt of changing this PR title and description? My understanding is you are trying to get `gp rebuild` working and need the docker CLI, but docker support in rootless images (as a feature) is not required/can be decoupled from this effort.

Suggestions:
- As @atduarte suggested, treat the embedded docker bin as a portable app. For example, like you mentioned, instead of persisting to `unarchived`, do so in `.supervisor/docker/bin`, and then have `gp rebuild` expect it to be there. This way:
  - The dependency of `gp rebuild` on the docker CLI does not break or change the end-user experience for the docker CLI
  - The IDE team is unblocked
- Inbox an issue to the Workspace Team to consider installing the docker CLI as part of `docker-up` like you have now, so customers get docker support in rootless images. At that time, we could also consider deleting `.supervisor/docker/bin`. This would be a lower priority given our current priorities.
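To illustrate the first suggestion, here is a hedged sketch of how a tool like `gp rebuild` could prefer a CLI bundled at `.supervisor/docker/bin` and fall back to whatever is on `PATH`. The function name, lookup order, and paths are illustrative assumptions, not the actual `gp` implementation:

```shell
#!/bin/sh
# choose_docker prints the docker binary a rebuild tool would use:
# a bundled copy under <root>/.supervisor/docker/bin if one exists and
# is executable, otherwise whatever "docker" resolves to on PATH.
choose_docker() {
  bundled="$1/.supervisor/docker/bin/docker"
  if [ -x "$bundled" ]; then
    echo "$bundled"
  else
    command -v docker || echo docker
  fi
}
```

With this lookup order, removing the bundled copy later (the second suggestion) degrades gracefully to the workspace image's own CLI.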
Regarding deploy for the `docker-up` change, we probably have to ship a new workspace cluster (I do not see `docker-up` as a version that can be overridden in `ide-config` in the application clusters). `registry-facade` has a specific `docker-up` in its configmap for `staticLayer`:
"staticLayer": [
{
"ref": "eu.gcr.io/gitpod-core-dev/build/supervisor:commit-1e24370e8e1ee19ca3b8d9521e1c205269978680",
"type": "image"
},
{
"ref": "eu.gcr.io/gitpod-core-dev/build/workspacekit:commit-243207eb85e1ac9ada023fc6ba3a4e3118d49d5e",
"type": "image"
},
{
"ref": "eu.gcr.io/gitpod-core-dev/build/docker-up:commit-243207eb85e1ac9ada023fc6ba3a4e3118d49d5e",
"type": "image"
}
],
You shared an internal concern about docker, docker-compose, and runc versions.
I have another idea right now: let supervisor do docker build/run under sudo for gp rebuild; then it can activate the docker daemon and use the shipped CLI even if a user doesn't have sudo access.
I probably can go with variant one as well, but it adds complexity since we need to ship docker twice, and it does not strike me that shipping a random docker CLI has valid usage. As you mentioned, the version of docker-compose is already pinned and overrides whatever the user has in the workspace image.
I was thinking to suggest removing docker-up from the static layer and instead allowing ide-service to deliver it. Then we could agree on something simple, like merging this PR and, if it does not work, reverting it with an IDE deployment. That way we would learn faster and would not need to deliver docker twice.
closing, since it is not blocking gp rebuild and docker support needs clarification
Description
Rootless images without the docker CLI have no easy way to install it, and without the docker CLI, docker-up cannot be triggered. In fact, the docker CLI is already part of docker-up. This PR resolves this by preinstalling the docker CLI and daemon as part of the docker-up image, instead of doing it at runtime.
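As a rough illustration of what the PR avoids: a runtime installer would first have to probe whether the workspace image already ships a CLI before downloading one. The helper below mimics such a check; the probed paths are assumptions for illustration, not the actual docker-up logic.

```shell
#!/bin/sh
# has_docker_cli reports (via exit status) whether a docker CLI is
# already present under the given image root, checking the usual
# install locations. Paths are illustrative assumptions.
has_docker_cli() {
  [ -x "$1/usr/bin/docker" ] || [ -x "$1/usr/local/bin/docker" ]
}
```

Baking the CLI and daemon into the docker-up image layer makes this probe-and-download step unnecessary: every workspace, rootless or not, starts with the binaries in place.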
Related Issue(s)
fix #15917
How to test
Test in https://github.com/gitpod-samples/template-elixir
Release Notes
Documentation
Build Options:
- Experimental feature to run the build with GitHub Actions (and not in Werft).
- If enabled, this will build `install/preview`.
- Valid options are `all`, `workspace`, `webapp`, `ide`, `jetbrains`, `vscode`, `ssh`.