
Run ComfyUI with ROCm on AMD GPU

Note: Image building is required

This Docker image is too big to build on GitHub Actions (the build fails with "No space left on device"), so a local build step (which mostly just downloads packages) is needed before running. The commands below include that step.

Prepare

Build & Run

You may need to add these environment variables (especially for APUs) to the docker run / podman run commands below; a complete example follows this list. (Credit to nhtua)

  • For RDNA and RDNA 2 cards: -e HSA_OVERRIDE_GFX_VERSION=10.3.0 \

  • For RDNA 3 cards: -e HSA_OVERRIDE_GFX_VERSION=11.0.0 \

  • Integrated graphics on CPU: -e HIP_VISIBLE_DEVICES=0 \
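For example, with an RDNA 2 card the override slots into the run command like this (a sketch of the Docker variant from the next section; pick the value that matches your GPU, and the same line works for podman run):

docker run -it --rm \
  --name comfyui-rocm \
  --device=/dev/kfd --device=/dev/dri \
  --group-add=video --ipc=host --cap-add=SYS_PTRACE \
  --security-opt seccomp=unconfined \
  --security-opt label=disable \
  -e HSA_OVERRIDE_GFX_VERSION=10.3.0 \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -e CLI_ARGS="" \
  yanwk/comfyui-boot:rocm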

With Docker
git clone https://github.com/YanWenKun/ComfyUI-Docker.git

cd ComfyUI-Docker/rocm

docker build . -t yanwk/comfyui-boot:rocm

mkdir -p storage

docker run -it --rm \
  --name comfyui-rocm \
  --device=/dev/kfd --device=/dev/dri \
  --group-add=video --ipc=host --cap-add=SYS_PTRACE \
  --security-opt seccomp=unconfined \
  --security-opt label=disable \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -e CLI_ARGS="" \
  yanwk/comfyui-boot:rocm
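CLI_ARGS is handed straight to ComfyUI at startup. For example, to run in low-VRAM mode, replace the empty CLI_ARGS line above with the line below (assuming --lowvram is the flag you want; any standard ComfyUI startup flag can go here the same way):

  -e CLI_ARGS="--lowvram" \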
With Podman
git clone https://github.com/YanWenKun/ComfyUI-Docker.git

cd ComfyUI-Docker/rocm

podman build . -t yanwk/comfyui-boot:rocm

mkdir -p storage

podman run -it --rm \
  --name comfyui-rocm \
  --device=/dev/kfd --device=/dev/dri \
  --group-add=video --ipc=host --cap-add=SYS_PTRACE \
  --security-opt seccomp=unconfined \
  --security-opt label=disable \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -e CLI_ARGS="" \
  yanwk/comfyui-boot:rocm

Once the app is loaded, visit http://localhost:8188/

If you want to dive in…

(Just side notes. Nothing to do with this Docker image)

ROCm has a PyTorch image:

docker pull rocm/pytorch:rocm6.2.3_ubuntu22.04_py3.10_pytorch_release_2.3.0

mkdir -p storage

docker run -it --rm \
  --name comfyui-rocm \
  --device=/dev/kfd --device=/dev/dri \
  --group-add=video --ipc=host --cap-add=SYS_PTRACE \
  --security-opt seccomp=unconfined \
  --security-opt label=disable \
  -p 8188:8188 \
  --user root \
  --workdir /root/workdir \
  -v "$(pwd)"/storage:/root/workdir \
  rocm/pytorch:rocm6.2.3_ubuntu22.04_py3.10_pytorch_release_2.3.0 \
  /bin/bash

git clone https://github.com/comfyanonymous/ComfyUI.git

# Or use conda to manage the dependencies
pip install -r ComfyUI/requirements.txt

# Or invoke it with python3, depending on your environment
python ComfyUI/main.py --listen --port 8188

The image is big, but if you find it hard to get the container above running, it may be helpful: it takes care of PyTorch, the most important part, and you only need to install a few more Python packages to run ComfyUI.
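Before installing anything, it's worth checking that the container actually sees your GPU. A minimal check run inside the container's bash shell (torch already ships with this image, so nothing extra is needed):

# Should print the HIP version and True if the GPU is visible
python -c "import torch; print(torch.version.hip, torch.cuda.is_available())"

If it prints False, revisit the --device flags and the HSA_OVERRIDE_GFX_VERSION hints from earlier.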

Additional notes for Windows users

(Just side notes. Nothing to do with this Docker image)

WSL2 supports ROCm and DirectML.

  • ROCm

If your GPU is in the Compatibility List, you can either install the Radeon software in your WSL2 distro, or use the ROCm PyTorch image.

  • DirectML

DirectML works for most GPUs (including AMD APUs and Intel GPUs). It’s slower than ROCm but still faster than CPU. See: Run ComfyUI on WSL2 with DirectML. A minimal launch sketch follows after this list.

  • ZLUDA

This one doesn’t use WSL2; it runs natively on Windows. ZLUDA can "translate" CUDA code to run on AMD GPUs. As a first step, I recommend trying SD-WebUI with ZLUDA, which is easier to get started with.
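For reference, a minimal DirectML launch sketch. It assumes ComfyUI is already cloned with its requirements installed (as in the linked guide), and that your ComfyUI version supports the --directml flag, which relies on the torch-directml package:

pip install torch-directml
python ComfyUI/main.py --directml --listen --port 8188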