Firstly, thank you very much for creating these images.
I'm facing this issue while using the "yanwk/comfyui-boot:cu121" Docker image with Docker Compose.
Running 'nvidia-smi' inside the container shows that the container can see and use the GPU, and I'm using the first workflow from https://comfyanonymous.github.io/ComfyUI_examples/flux/, the "Flux Dev" workflow.
It seems to start fine and all models are loaded, but GPU utilization only reaches about 30% and less than half of the VRAM is used, while it consumes all of my system RAM and crashes when it runs the SamplerCustomAdvanced node.
I have 34 GB of RAM and two GPUs: the system uses one, and the other, with 24 GB of VRAM, is the one used by the ComfyUI Docker image.
To me, it looks like an issue with this image, since it should be using the VRAM rather than the RAM, but maybe I am doing something wrong.
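For context, here is a minimal sketch of the GPU part of such a compose file (the service name, volume path, and device index are placeholders rather than my exact setup; the GPU reservation block follows the standard Docker Compose NVIDIA syntax):

```yaml
services:
  comfyui:
    image: yanwk/comfyui-boot:cu121
    ports:
      - "8188:8188"          # ComfyUI's default web UI port
    volumes:
      - ./storage:/root      # placeholder path for models and outputs
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["1"]     # pin to the 24 GB card; index is illustrative
              capabilities: [gpu]
```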
I can't exactly reproduce your issue, but I guess it's related to the UNET workflow: when loading models, it consumes all of my 32 GB of RAM as well, then frees some RAM, starts consuming 12 GB of VRAM, and reaches 100% GPU usage.
The checkpoint version of the workflow runs fine for me, so my ideas are:
Thank you very much for your answer @YanWenKun !
I'll try these options. Regarding option 4, I use Ubuntu 24.04 and already have the latest available drivers. As for option 3, I think "cu124-megapak" is too bulky; it would be nice to have a slim version of cu124.
I read that fp8 and schnell reduce the quality of the generated images compared to the original "Flux Dev", so I might end up buying more RAM if that is the case.
In the end, the only solution was to increase the amount of RAM; it seems the models are first loaded into RAM and then transferred to the GPU. It's not a problem for me anymore, but it could be for others.
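For others who hit this before buying more RAM, it may be worth trying ComfyUI's memory-placement flags from inside the container. A hedged sketch, assuming the image forwards a CLI_ARGS environment variable to ComfyUI (please verify against the image's README; the flag itself is a standard ComfyUI option, see `python main.py --help`):

```yaml
services:
  comfyui:
    image: yanwk/comfyui-boot:cu121
    environment:
      # Assumption: the image passes CLI_ARGS straight through to ComfyUI's main.py.
      # --gpu-only stores and runs text encoders/VAE on the GPU as well, which can
      # lower peak system-RAM usage at the cost of more VRAM on a 24 GB card.
      CLI_ARGS: "--gpu-only"
```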