Distribute as Docker Image #1801
Hi interested people. I see @maehldom opened PR #1868 with a possible approach. I'm not sure if we're going to use that approach, because Goreleaser also supports Docker, in theory. We'll see which is better... I'd just like to say that I created an organization on Docker Hub and applied to their OSS plan, so we're one step closer to making this happen.
@andreynering any particular reason for choosing DockerHub instead of pushing to ghcr.io and linking the package to this repository?
@mircea-pavel-anton I just assumed Docker Hub was the default option and didn't think about alternatives. Are there any advantages or disadvantages of each platform that I should be aware of?
@andreynering I generally prefer the GitHub Container Registry (ghcr.io) over DockerHub. For one, DockerHub has more aggressive rate-limiting for pulling images, so it can end up being problematic for certain CI/CD workflow scenarios. Also, I personally like using GHCR because it allows me to link one or more packages directly to the repo and also show the README of the repo as a description of the image. You can check out this repo as an example: https://github.com/mirceanton/external-dns-provider-mikrotik
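For reference, a minimal sketch of how that repo linking works via the standard OCI source label; the label values are just examples, and it assumes a prebuilt `task` binary sitting in the build context:

```Dockerfile
# Example only: GHCR can associate an image with a GitHub repository via the
# standard OCI source label (the link can also be set manually in the UI).
FROM scratch
LABEL org.opencontainers.image.source="https://github.com/go-task/task"
LABEL org.opencontainers.image.description="Task runner / simpler Make alternative"
# Assumes a prebuilt static `task` binary is present in the build context.
COPY task /usr/local/bin/task
ENTRYPOINT ["/usr/local/bin/task"]
```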
Also, I left some thoughts on #1868, though I think I'd much rather go with the
Thanks @mircea-pavel-anton. GHCR does look simpler, indeed. I'll think about it.
@andreynering I threw together this PoC if you are interested in what this would all look like: #1875. It's not yet done, but it should give you a rough idea. I left some notes on the PR.
What is the point of this exactly? To distribute task via a different mechanism (a Docker image) only to allow it to be copied into another image, rather than using an existing installer (such as snap)? In such cases a minimal Dockerfile might be better for the distribution:

```Dockerfile
FROM scratch
COPY hello /
CMD ["/hello"]
```

We currently install task into containers this way:

```Dockerfile
...
# Go Builder
# ==========
FROM golang:bullseye as go
RUN GOBIN=/usr/local/bin go install github.com/rogpeppe/go-internal/cmd/testscript@latest
RUN GOBIN=/usr/local/bin go install github.com/go-task/task/v3/cmd/task@latest
...
COPY --from=go /usr/local/bin/task /usr/local/bin/task
```

but there are many other ways... Observe that I'm not using bookworm either. Copying from an existing container image would certainly be faster than building from source.
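For comparison, a rough sketch of that copy-from-a-published-image route; the ghcr.io/go-task/task image name and the binary path inside it are hypothetical, since no official image exists yet:

```Dockerfile
# Hypothetical image name and in-image binary path; no official image exists yet.
FROM debian:bookworm-slim
COPY --from=ghcr.io/go-task/task:latest /usr/local/bin/task /usr/local/bin/task
CMD ["task", "--version"]
```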
Hi, @trulede! You're right that, if the point of the Dockerfile was just to serve as an "installation" method, it would have been better to use scratch, but that's not really the point here. Sure, the main use case I personally have is just that, but there are other use cases in which this could make sense.
This is why I chose bookworm as a base. Given that it also runs bash, I was hoping to get bash completion working for that initial example too.
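Roughly what I had in mind for that bookworm-based variant, as a sketch: the image name is hypothetical, and the commented-out completion line assumes a Task version that supports a `--completion` flag.

```Dockerfile
FROM debian:bookworm-slim
# bash-completion provides the machinery for shell completions in interactive use.
RUN apt-get update && apt-get install -y --no-install-recommends bash-completion \
    && rm -rf /var/lib/apt/lists/*
# Hypothetical image name; no official image exists yet.
COPY --from=ghcr.io/go-task/task:latest /usr/local/bin/task /usr/local/bin/task
# Assumption: only works if the installed Task version supports --completion.
# RUN task --completion bash > /etc/bash_completion.d/task
CMD ["bash"]
```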
We (I) find task to be useless without other tools that do the work. That makes putting task into a container for general-purpose use a little bit futile; you might think otherwise, and that's OK. What we typically do is build an image for a purpose, and "mix" task into it. If task came from an existing container, sure, but perhaps also consider exactly which use case is really going to be viable (which distro, version, arch). Also, most of the time we use task to run other containers (Docker in Docker)... so rather curiously we get by with a fairly generic container image, which contradicts my previous statement a little, but only a little, because the image is still specialised for running testscript. That is what makes me sceptical of the benefit of a generic task container: it's just a little bit different from those other tools (which act on systems, like k8s, via ports etc.). Really, it does not bother me one way or the other. It certainly can be useful, as you mention.
You make valid points, and I generally agree. Having it just by itself is a bit restrictive, and it will most likely be "mixed" into larger Dockerfiles. I just found it a lot easier to reference a centralized location because I do this "mixing" process quite a lot. I inject task into a lot of my devcontainers (almost all of them at this point), and having it this way would make that process a bit easier. I am totally fine with maintaining my own image just for that (I already do) if the community does not see value in this. I just thought I'd share in case others are in a similar situation.
My 2 cents: I tend to agree with the points made by @trulede. For me personally, I think there is very little benefit to a generic distro container with Task installed. This is not going to fulfil the needs of most Task users, as there is always going to be other tooling required for anything more complex than a simple script, and you'll just end up mixing your own image anyway. I might find it useful to copy the binary from a scratch image if I were mixing my own image and needed Task. However, as pointed out above, it's already trivial to add Task to an image with a multi-stage build by compiling Task yourself.

With that said, if others would find it useful for whatever reason, then there is no reason not to do this. In fact, there is no reason not to make everyone happy and build one image from scratch and another from Debian, etc. If we do decide to proceed, I would also favour GHCR for the aforementioned reasons, namely DockerHub's awful rate limits and the need for a special OSS plan, when GHCR is free and nicely linked to our repo.
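For reference, a minimal sketch of that scratch-plus-multi-stage idea, compiling Task yourself (the Go image tag and install paths are illustrative):

```Dockerfile
# Build stage: compile a static Task binary from source.
FROM golang:1.22 AS build
RUN CGO_ENABLED=0 GOBIN=/out go install github.com/go-task/task/v3/cmd/task@latest

# Final stage: nothing but the binary, ready to be copied into other images.
FROM scratch
COPY --from=build /out/task /usr/local/bin/task
ENTRYPOINT ["/usr/local/bin/task"]
```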
@pd93 I generally agree that a distro-based image for Task is very, very niche in terms of actual usability.
I would add here that building Task from source takes considerably longer than something like copying the binary from another image directly. For my devcontainers, I don't typically push the built image to any registry; I just build them on the spot, so that would add a considerable amount of time to my workflow. One thing I'd really like about the Docker approach is that it makes it easy to pin the dependency to a specific version in your Dockerfile, and it would even allow you to easily pin it to a commit SHA. Renovate would then be able to easily update that package too, to keep the image up to date if needed. An equivalent approach would be to download precompiled binaries, which sure, would work I guess, but that is quite a bit harder to integrate with Renovate, for example, afaik. There are workarounds, I completely agree, but I don't think any of those workarounds is as simple from a DX perspective, and adding an extra step to goreleaser to build and push images should add basically zero complexity, since it just copies the binaries it already builds anyway into some containers. As far as building multiple container images with multiple bases goes, it might be a bit extra? I'd say we could go with
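To illustrate the pinning point, a sketch with placeholder image name, tag, and digest; tools like Renovate can typically keep such references up to date, assuming the corresponding tags and digests are actually published:

```Dockerfile
# Image name, tag, and digest below are placeholders for illustration only.
FROM debian:bookworm-slim
# Pin to a specific release tag...
COPY --from=ghcr.io/go-task/task:v3.0.0 /usr/local/bin/task /usr/local/bin/task
# ...or pin to an immutable digest instead (placeholder digest):
# COPY --from=ghcr.io/go-task/task@sha256:<digest> /usr/local/bin/task /usr/local/bin/task
```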
I would be happy to start with a Task scratch container for now and go from there. We can add some docs that describe how to build your own image by copying the binary across from ours. @mircea-pavel-anton Sidenote: have you looked into https://github.com/devcontainers/features? I use GH codespaces occasionally and these are nice for users who don't want to create a custom Dockerfile for a simple project. I have thought about creating one of these for Task before and submitting it to the official index. It would be nice to maintain this in-house and I'll happily support anyone wanting to help do this. |
Great! Should I add these things to #1875?
I used to use them, but something about it just feels rather off to me. If you actually look under the hood, I find it a bit excessive to run scripts that are a few hundred lines of bash just to install a tool, and more often than not (since I suck at bash), I don't fully understand what they're doing 😅 Also, not all the tools I need are available as devcontainer features, so I'd sometimes be left needing a mixed approach: some are built locally or copied from Docker images, while some are imported via devcontainer features. Then again, I did not manage to pin those to specific versions that are then managed by Renovate either, so I generally use devcontainer features if I just want to pin to
Conceptually, I really dislike how devcontainer features are implemented from the ground up. It seems weird to me that a tool meant to give us container images for development environments wouldn't also package tools and binaries as OCI images, but rather uses lengthy and error-prone bash scripts to accomplish it. Maybe it's just a personal preference, but I very rarely use those. That being said, since we already have an installer script here, I think it would be fairly trivial to get that into the official index?
I got no response from Docker about the OSS plan (which would guarantee a better rate limit), so I think we're probably going to release to GHCR for now, which also looks to be simpler and is probably good enough. Nothing stops us from also releasing to Docker Hub in the future if we decide to.
Alrighty then. @andreynering @pd93 I'll update #1875 to use scratch as a base, update the workflow to log in to GHCR, and add some docs as well. Sounds good? I'm currently in the middle of moving, so I don't have my workstation up and running, but I'll probably get to it over the weekend.
@mircea-pavel-anton No hurry, and thank you for contributing!
I think it would be helpful to also distribute this tool as a multi-platform Docker image.

My main use case for this is using that image as a makeshift build-step when building out larger images. Like this, for example.

Since we're already using goreleaser to create the release artifacts, it would be a fairly simple change to instruct it to also build and push the Docker images as well. Something like this should do the trick, with an extra docker login step in the goreleaser workflow.

If there is interest in this, I can put together a PR with the proposed changes.
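As a rough sketch of the makeshift build-step idea (the image names and the `build` task in the project's Taskfile are assumptions here):

```Dockerfile
# Hypothetical image name; an official Task image does not exist yet.
FROM ghcr.io/go-task/task:latest AS task

FROM node:20-slim
# Pull the Task binary in as a build-step helper.
COPY --from=task /usr/local/bin/task /usr/local/bin/task
WORKDIR /app
COPY . .
# Assumes the project's Taskfile defines a `build` task.
RUN task build
```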