These models are made to work with stable-diffusion.cpp release master-ac54e00 onwards. Support for other inference backends is not guaranteed.
Quantized using this PR: https://github.com/leejet/stable-diffusion.cpp/pull/447
Files:
- sd3.5_large_turbo-q2_k_4_0.gguf: Smallest quantization yet. Use this if you can't afford anything bigger.
- sd3.5_large_turbo-q3_k_4_0.gguf: Smaller than q4_0, with acceptable degradation.
- sd3.5_large_turbo-q4_k_4_0.gguf: Exactly the same size as q4_0, but with slightly less degradation. Recommended.
- sd3.5_large_turbo-q4_k_4_1.gguf: Smaller than q4_1, with comparable degradation. Recommended.
- sd3.5_large_turbo-q4_k_5_0.gguf: Smaller than q5_0, with comparable degradation. Recommended.
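As a usage sketch, one of these files can be passed to the stable-diffusion.cpp CLI via `--diffusion-model`. SD3.5 loads its text encoders separately, so the CLIP and T5 encoder file names below are assumptions — substitute your local copies. The low step count and `--cfg-scale 1.0` follow the usual settings for turbo-distilled models; adjust to taste.

```shell
# Text-to-image with the recommended q4_k_4_0 mix.
# Encoder file names are placeholders; point them at your own downloads.
./sd \
  --diffusion-model sd3.5_large_turbo-q4_k_4_0.gguf \
  --clip_l clip_l.safetensors \
  --clip_g clip_g.safetensors \
  --t5xxl t5xxl_fp16.safetensors \
  -p "a lighthouse on a cliff at sunset, photograph" \
  --cfg-scale 1.0 \
  --steps 4 \
  -o output.png
```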
Base model: stabilityai/stable-diffusion-3.5-large-turbo