- Derived from `cu121-megapak`
- Dev kits:
  - CUDA dev kit (12.4)
  - Python dev package (3.12)
  - GCC C++ (13)
  - OpenCV-devel
  - CMake, Ninja…
- Latest stable version of xFormers + PyTorch
- Tools:
  - Vim, Fish, fd…
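If you want to confirm the bundled toolchain, a quick check from inside a running container looks like this (a sketch; it assumes the container name `comfyui-cu124-mega` used in the run commands below and that `bash` is available in the image):

```sh
# Open a shell inside the running container
docker exec -it comfyui-cu124-mega bash

# Inside the container: check the dev-kit versions listed above
nvcc --version      # CUDA dev kit (expect 12.4)
python3 --version   # Python (expect 3.12)
g++ --version       # GCC C++ (expect 13)
cmake --version
ninja --version
```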
Run with Docker:

```sh
mkdir -p storage

docker run -it --rm \
  --name comfyui-cu124-mega \
  --gpus all \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -e CLI_ARGS="--fast" \
  yanwk/comfyui-boot:cu124-megapak
```
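Once the container is up, the web UI is served on the published port 8188. An optional host-side check (assumes `curl` is installed on the host):

```sh
# The UI should respond on the published port once startup finishes
curl -sSf http://localhost:8188 > /dev/null && echo "ComfyUI is up at http://localhost:8188"
```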
Or run with Podman:

```sh
mkdir -p storage

podman run -it --rm \
  --name comfyui-cu124-mega \
  --device nvidia.com/gpu=all \
  --security-opt label=disable \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -e CLI_ARGS="--fast" \
  docker.io/yanwk/comfyui-boot:cu124-megapak
```
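The `--device nvidia.com/gpu=all` flag exposes the GPU via CDI. If Podman reports that the device cannot be found, you usually need to generate a CDI specification first. A sketch, assuming the NVIDIA Container Toolkit is installed on the host:

```sh
# Generate the CDI spec for the installed NVIDIA driver (conventional output path)
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# List the devices Podman can now reference, e.g. nvidia.com/gpu=all
nvidia-ctk cdi list
```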
On the first start, the container runs a download script and, once the download finishes, creates an empty marker file `.download-complete`. If the download is interrupted, the script resumes it on the next start (using aria2's resume mechanism). To skip the download script entirely, create the `.download-complete` file yourself:
```sh
mkdir -p storage
touch storage/.download-complete
```
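Conversely, deleting the marker makes the download script run again (and resume any unfinished downloads) on the next start:

```sh
rm -f storage/.download-complete
```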
| Args | Description |
|---|---|
| `--lowvram` | If your GPU has only 4GB of VRAM. |
| `--novram` | If `--lowvram` still runs out of memory. |
| `--cpu` | Run on the CPU. It's pretty slow. |
| `--use-pytorch-cross-attention` | If you don't want to use xFormers. This may perform well on WSL2, but is significantly slower on Linux hosts. |
| `--preview-method taesd` | Enable higher-quality previews with TAESD. ComfyUI-Manager will override this (settings are available in the Manager UI). |
| `--front-end-version Comfy-Org/ComfyUI_frontend@latest` | Use the most up-to-date frontend version. |
| `--fast` | Enable experimental optimizations. Currently the only optimization is float8_e4m3fn matrix multiplication on NVIDIA 4000/Ada-series cards or newer. Might break things or lower quality. See the commit. |

More `CLI_ARGS` are available at ComfyUI.
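Multiple arguments can be combined in the `CLI_ARGS` variable when starting the container. For example, a low-VRAM setup with TAESD previews (illustrative values; the rest of the command matches the run examples above):

```sh
docker run -it --rm \
  --name comfyui-cu124-mega \
  --gpus all \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -e CLI_ARGS="--lowvram --preview-method taesd" \
  yanwk/comfyui-boot:cu124-megapak
```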
| Variable | Example Value | Memo |
|---|---|---|
| `HTTP_PROXY` | | Set HTTP proxy. |
| `PIP_INDEX_URL` | | Set mirror site for the Python Package Index. |
| `HF_ENDPOINT` | | Set mirror site for HuggingFace Hub. |
| `HF_TOKEN` | `'hf_your_token'` | Set HuggingFace Access Token. |
| `HF_HUB_ENABLE_HF_TRANSFER` | `1` | Enable HuggingFace Hub experimental high-speed file transfer. Only makes sense if you have a >1000 Mbps and very stable connection (e.g. a cloud server). |
| `TORCH_CUDA_ARCH_LIST` | `7.5` | Build target for PyTorch and its extensions. For most users no setup is needed, as it is selected automatically on Linux. When needed, set only the single build target for your GPU. |
| `CMAKE_ARGS` | (Default) | CMake build options for projects using CUDA. |
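These variables are passed to the container the same way as `CLI_ARGS`, with additional `-e` flags. A sketch based on the Podman command above, setting a HuggingFace token and enabling high-speed transfers (the token value is a placeholder):

```sh
podman run -it --rm \
  --name comfyui-cu124-mega \
  --device nvidia.com/gpu=all \
  --security-opt label=disable \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -e CLI_ARGS="--fast" \
  -e HF_TOKEN="hf_your_token" \
  -e HF_HUB_ENABLE_HF_TRANSFER=1 \
  docker.io/yanwk/comfyui-boot:cu124-megapak
```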