SD3.5-large (8B) support #442
Comments
It's currently supported, see #445.
@razvanab it should work. I don't see a reason why they wouldn't be supported.
It does nothing; it just goes to the cmd prompt again.
@razvanab I can confirm, it doesn't work. You'll have to quantize it yourself with sdcpp, or wait for someone else to do it and upload the models to Huggingface.
I see.
Ok, now I get this error. I should probably wait for someone who knows what they're doing to quantize it. I quantized t5xxl too, and that got rid of some errors, but I still get a lot of errors for: Never mind, I was stupid for not getting the correct clip_g.safetensors file. Sorry about this.
@razvanab I'm uploading some here if you want: https://huggingface.co/stduhpf/SD3.5-Large-Turbo-GGUF-mixed-sdcpp
I did the same last night, but I forgot to post it here. |
@razvanab Btw you can find more compatible t5xxl quants and clip-l quants here: https://huggingface.co/Green-Sky/flux.1-schnell-GGUF/tree/main. |
Oh, nice: t5xxl q8_0 under 6 GB. Thanks.
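The "under 6 GB" figure is consistent with a back-of-the-envelope estimate. This is a sketch, not from the thread: the ~4.76B parameter count for the t5xxl encoder and the q8_0 block layout (34 bytes per block of 32 weights: 32 int8 values plus one fp16 scale) are assumptions.

```python
# Rough size estimate for a q8_0 quant of the t5xxl text encoder.
# Assumed: ~4.76B parameters; q8_0 stores each block of 32 weights
# in 34 bytes (32 int8 values + one fp16 scale), i.e. 8.5 bits/weight.
PARAMS = 4.76e9
BYTES_PER_BLOCK = 34
WEIGHTS_PER_BLOCK = 32

size_bytes = PARAMS * BYTES_PER_BLOCK / WEIGHTS_PER_BLOCK
print(f"~{size_bytes / 1e9:.2f} GB")  # prints "~5.06 GB"
```

At roughly 5 GB, a q8_0 t5xxl comfortably fits the "under 6 GB" observation above.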
Stable Diffusion 3.5 Large and Large Turbo just got released publicly.
https://huggingface.co/stabilityai/stable-diffusion-3.5-large
https://huggingface.co/stabilityai/stable-diffusion-3.5-large-turbo
Inference code here (warning: weird licence): https://github.com/Stability-AI/sd3.5
It's a model that should perform fairly well (SD3-Large is ranked slightly above Flux Schnell on the Artificial Analysis arena leaderboard, and this is an upgraded version of SD3-Large), while being smaller than Flux (it has 8B parameters).
Right now, these two models are not supported by sdcpp (I tried).
What's required:
- `--clip_g` argument: Add `--clip_g` argument and support split SD3 2B models (for SD3.5 support) #444

Sidenote: SD3.5 Medium (2B) is also going to be released soon; hopefully it will work as a drop-in replacement for SD3 2B.
Edit: About quantization: the majority of tensors in SD3.5 Large do not divide evenly into whole blocks of 256 values, so they are skipped when trying to quantize to q3_k, q4_k, and so on.
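The block-size constraint can be sketched as a simple divisibility check. This is illustrative only: the super-block size of 256 matches GGML's k-quants, while the function name and the example row width 2432 are hypothetical.

```python
# Sketch of why some tensors get skipped by k-quants (q3_k, q4_k, ...):
# these formats pack weights into super-blocks of 256 elements along a
# tensor's first dimension. If the row size is not a multiple of 256,
# the row cannot be split into whole blocks, so a quantizer has to skip
# the tensor (leaving it at a higher precision) instead.
QK_K = 256  # k-quant super-block size in GGML

def k_quant_compatible(row_size: int) -> bool:
    """Return True if rows of this width split into whole 256-blocks."""
    return row_size % QK_K == 0

# Example: a 4096-wide row quantizes cleanly; a 2432-wide row does not
# (2432 = 9 * 256 + 128, leaving a partial block).
print(k_quant_compatible(4096))  # prints "True"
print(k_quant_compatible(2432))  # prints "False"
```

Tensors that fail the check would typically stay at q8_0 or f16, which is why a "mixed" quant like the one linked above ends up larger than a uniform k-quant would suggest.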