---
license: other
license_name: sacla
license_link: >-
https://huggingface.co/stabilityai/stable-diffusion-3.5-large-turbo/blob/main/LICENSE.md
base_model:
- stabilityai/stable-diffusion-3.5-large-turbo
base_model_relation: quantized
---
## Overview
These models are made to work with [stable-diffusion.cpp](https://github.com/leejet/stable-diffusion.cpp) release [master-ac54e00](https://github.com/leejet/stable-diffusion.cpp/releases/tag/master-ac54e00) and later. Support for other inference backends is not guaranteed.
Quantized using [this pull request](https://github.com/leejet/stable-diffusion.cpp/pull/447).
Standard K-quants do not work well with the SD3.5-Large models, because around 90% of the weights sit in tensors whose shapes are not a multiple of the 256-element K-quant superblock and therefore can't be quantized that way. Mixing quantization types (K-quants where the shape allows it, a legacy quant everywhere else, as sketched below) lets us take advantage of the better fidelity of K-quants to some extent while keeping the model file size relatively small.
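As a rough illustration of the idea (this is **not** the actual code from the pull request above; the function name, type strings and shapes are made up for the example), the per-tensor choice boils down to checking whether a tensor's row length is a multiple of the 256-element superblock, falling back to a legacy quant otherwise, hence names like `q4_k_4_0` (q4_K where possible, q4_0 elsewhere):

```python
# Hypothetical sketch of mixed-type selection, not the real implementation.
QK_K = 256      # K-quant superblock size in ggml
QK_LEGACY = 32  # block size of legacy quants such as q4_0 / q4_1 / q5_0

def pick_quant_type(shape, kquant="q4_K", fallback="q4_0"):
    """Choose a quantization type for a tensor of the given shape."""
    row = shape[-1]  # ggml quantizes along rows
    if row % QK_K == 0:
        return kquant    # tensors whose rows fit the 256-element superblock
    if row % QK_LEGACY == 0:
        return fallback  # the remaining ~90% of weights (the "_4_0" part)
    return "f16"         # odd-shaped tensors are left unquantized

# Illustrative shapes only, not necessarily real SD3.5-Large tensor sizes:
print(pick_quant_type((4096, 4096)))  # -> q4_K (4096 is a multiple of 256)
print(pick_quant_type((2432, 2432)))  # -> q4_0 (2432 is not)
```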
## Files:
### Mixed types:
- [sd3.5_large_turbo-q2_k_4_0.gguf](https://huggingface.co/stduhpf/SD3.5-Large-Turbo-GGUF-mixed-sdcpp/blob/main/sd3.5_large_turbo-q2_k_4_0.gguf): Smallest quantization so far. Use this only if you can't afford anything bigger.
- [sd3.5_large_turbo-q3_k_4_0.gguf](https://huggingface.co/stduhpf/SD3.5-Large-Turbo-GGUF-mixed-sdcpp/blob/main/sd3.5_large_turbo-q3_k_4_0.gguf): Smaller than q4_0, with acceptable degradation.
- [sd3.5_large_turbo-q4_k_4_0.gguf](https://huggingface.co/stduhpf/SD3.5-Large-Turbo-GGUF-mixed-sdcpp/blob/main/sd3.5_large_turbo-q4_k_4_0.gguf): Exactly the same size as q4_0, but with slightly less degradation. Recommended.
- [sd3.5_large_turbo-q4_k_4_1.gguf](https://huggingface.co/stduhpf/SD3.5-Large-Turbo-GGUF-mixed-sdcpp/blob/main/sd3.5_large_turbo-q4_k_4_1.gguf): Smaller than q4_1, with comparable degradation. Recommended.
- [sd3.5_large_turbo-q4_k_5_0.gguf](https://huggingface.co/stduhpf/SD3.5-Large-Turbo-GGUF-mixed-sdcpp/blob/main/sd3.5_large_turbo-q4_k_5_0.gguf): Smaller than q5_0, with comparable degradation. Recommended.
### Legacy types:
- [sd3.5_large_turbo-q4_0.gguf](https://huggingface.co/stduhpf/SD3.5-Large-Turbo-GGUF-mixed-sdcpp/blob/main/legacy/sd3.5_large_turbo-q4_0.gguf): Same size as q4_k_4_0 but with more degradation. Not recommended (use q4_k_4_0 instead).
- (I wanted to upload more, but uploads stopped working; maybe I hit a rate limit.)
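
For reference, a typical text-to-image invocation with the stable-diffusion.cpp CLI looks roughly like the one below. The file names, text-encoder paths, prompt and sampling settings are only illustrative (the SD3.5 text encoders are distributed separately from the diffusion model), so adjust them to your setup; Turbo checkpoints are meant to run with very few steps and essentially no CFG.

```sh
./sd -m sd3.5_large_turbo-q4_k_4_0.gguf \
  --clip_l clip_l.safetensors --clip_g clip_g.safetensors --t5xxl t5xxl_fp16.safetensors \
  -p "a photo of a cat" -H 1024 -W 1024 \
  --cfg-scale 1.0 --steps 4 --sampling-method euler -o output.png
```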
## Outputs:
| Quantization | Sample 1 | Sample 2 | Sample 3 |
| ------------------ | -------------------------------- | ---------------------------------- | ---------------------------------- |
| q2_k_4_0 | ![q2_k_4_0](Images/q2_k_4_0.png) | ![q2_k_4_0](Images/1_q2_k_4_0.png) | ![q2_k_4_0](Images/2_q2_k_4_0.png) |
| q3_k_4_0 | ![q3_k_4_0](Images/q3_k_4_0.png) | ![q3_k_4_0](Images/1_q3_k_4_0.png) | ![q3_k_4_0](Images/2_q3_k_4_0.png) |
| q4_0 | ![q4_0](Images/q4_0.png) | ![q4_0](Images/1_q4_0.png) | ![q4_0](Images/2_q4_0.png) |
| q4_k_4_0 | ![q4_k_4_0](Images/q4_k_4_0.png) | ![q4_k_4_0](Images/1_q4_k_4_0.png) | ![q4_k_4_0](Images/2_q4_k_4_0.png) |
| q4_k_4_1 | ![q4_k_4_1](Images/q4_k_4_1.png) | ![q4_k_4_1](Images/1_q4_k_4_1.png) | ![q4_k_4_1](Images/2_q4_k_4_1.png) |
| q4_1 | ![q4_1](Images/q4_1.png) | ![q4_1](Images/1_q4_1.png) | ![q4_1](Images/2_q4_1.png) |
| q4_k_5_0 | ![q4_k_5_0](Images/q4_k_5_0.png) | ![q4_k_5_0](Images/1_q4_k_5_0.png) | ![q4_k_5_0](Images/2_q4_k_5_0.png) |
| q5_0 | ![q5_0](Images/q5_0.png) | ![q5_0](Images/1_q5_0.png) | ![q5_0](Images/2_q5_0.png) |
| q8_0 | ![q8_0](Images/q8_0.png) | ![q8_0](Images/1_q8_0.png) | ![q8_0](Images/2_q8_0.png) |
| f16 (safetensors) | ![f16](Images/f16.png) | ![f16](Images/1_f16.png) | ![f16](Images/2_f16.png) |