How much VRAM is needed to run VistaDream? Why can't I run it on Google Colab? It doesn't report any errors, it just stops, and there is no sign of GPU or system memory overflow.
#7 · Open · libai-lab opened this issue on Oct 24, 2024 · 3 comments
VistaDream needs about 22 GB of VRAM to run. It also needs to preload multiple models, which demands additional memory. For more details, please refer to the discussion in Section A.1.2 of our paper. I will try to release a Colab demo as soon as possible.
I think I found the cause: the Colab system seems to mistakenly detect the Fooocus files in this repo as a web UI, causing it to terminate the session without providing any information, which is confusing.
I can also confirm that Colab's NVIDIA T4 with 16 GB of VRAM is still not enough to run this, even when the session isn't interrupted (it requires about 17 GB). Some memory optimization will therefore need to be added to the code.
Some suggestions (even though I don't know much about coding):

- Find a smaller model than `juggernautXL_v8Rundiffusion.safetensors`
- Try CPU offloading like `pipe.enable_model_cpu_offload()` (see the sketch below)
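
A minimal sketch of what the offloading suggestion could look like, assuming the SDXL checkpoint is loaded through a diffusers `StableDiffusionXLPipeline`; VistaDream's actual loading code may differ, and the checkpoint path here is illustrative:

```python
# Sketch only: assumes the checkpoint is loaded via diffusers; VistaDream's
# real loading code may use a different pipeline or wrapper.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "juggernautXL_v8Rundiffusion.safetensors",  # illustrative local path
    torch_dtype=torch.float16,                  # fp16 roughly halves VRAM use
)

# Keeps sub-models (text encoders, UNet, VAE) in CPU RAM and moves each one
# to the GPU only while it is actually running, at some cost in speed.
pipe.enable_model_cpu_offload()

# Even lower VRAM, but much slower: offload layer by layer instead.
# pipe.enable_sequential_cpu_offload()
```

With model-level offload, peak VRAM is bounded by the largest single sub-model rather than the whole pipeline, which is often enough to fit a 16 GB T4; sequential offload goes further but is noticeably slower.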