  1. mixtao-7bx2-moe-v8.1.Q4_K_M-v2.gguf updates the GGUF file type of zhengr/MixTAO-7Bx2-MoE-v8.1-GGUF to the current version, in case the older version is no longer supported:

```
./quantize ./models/mymodel/ggml-model-Q4_K_M.gguf ./models/mymodel/ggml-model-Q4_K_M-v2.gguf COPY
```
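To confirm the re-exported file loads, it can be run with the `main` example from the same llama.cpp build. This is only a quick smoke-test sketch; the binary name matches the pre-rename llama.cpp layout of this build, and the model path and prompt are assumptions, not taken from the card.

```bash
# Smoke test of the re-exported GGUF (model path assumed from the command above)
./main -m ./models/mymodel/ggml-model-Q4_K_M-v2.gguf -p "Hello, world" -n 64
```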

  1. mixtao-7bx2-moe-v8.1.Q4_K_M.gguf was quantized with the latest llama.cpp at the time (build = 2866 (b1f8af18)).
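For reference, a Q4_K_M quantization with that build would typically look like the sketch below. The card does not include the original command, so the f16 source filename is an assumption.

```bash
# Hypothetical reproduction of the Q4_K_M quantization step (f16 source path assumed)
./quantize ./models/mymodel/ggml-model-f16.gguf ./models/mymodel/ggml-model-Q4_K_M.gguf Q4_K_M
```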