---
license: other
license_name: sacla
license_link: >-
  https://huggingface.co/stabilityai/stable-diffusion-3.5-large-turbo/blob/main/LICENSE.md
base_model:
  - stabilityai/stable-diffusion-3.5-large-turbo
base_model_relation: quantized
---

## Overview

These models are made to work with stable-diffusion.cpp from release master-ac54e00 onwards. Support in other inference backends is not guaranteed.

Quantized using this PR: https://github.com/leejet/stable-diffusion.cpp/pull/447

Normal K-quants do not work properly with SD3.5-Large models, because around 90% of the weights are in tensors whose shapes don't match the 256-element superblock size of K-quants and therefore can't be quantized this way. Mixing quantization types lets us take advantage of the better fidelity of K-quants to some extent while keeping the model file size relatively small.

## Files

### Mixed Types

### Legacy types

- sd3.5_large_turbo-q4_0.gguf: same size as q4_k_4_0; not recommended (use q4_k_4_0 instead)
- (I wanted to upload more, but uploading stopped working; maybe I hit a rate limit)

## Outputs

Sample output images for each quantization type (shown as a table on the model page): q2_k_4_0, q3_k_4_0, q4_0, q4_k_4_0, q4_k_4_1, q4_1, q4_k_5_0, q5_0, q8_0, and f16 (safetensors reference).