Description

This repo contains GGUF format model files for cloudyu/Yi-34Bx2-MoE-60B.
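
To fetch a single quantized file, the huggingface_hub CLI can be used. This is a minimal sketch; <this-repo-id> is a placeholder for this repository's id rather than a value taken from the card.

   pip install huggingface_hub
   # download one GGUF file into the current directory; replace <this-repo-id> with this repo's id
   huggingface-cli download <this-repo-id> cloudyu_Yi-34Bx2-MoE-60B_Q3_K_XS.gguf --local-dir .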

About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

How to run GGUF with llama.cpp on an A10 (24 GB VRAM)

   # build llama.cpp with CUDA (cuBLAS) support
   git clone https://github.com/ggerganov/llama.cpp.git
   cd llama.cpp/
   make LLAMA_CUBLAS=1
   # run interactively (-i); -ngl 36 offloads 36 layers to the GPU
   ./main --model ./cloudyu_Yi-34Bx2-MoE-60B_Q3_K_XS.gguf -p "What is the biggest animal?" -i -ngl 36
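
Beyond the interactive run above, the same build can also serve the model over HTTP. The sketch below uses llama.cpp's bundled server binary from that build; the context size, port, and prompt are illustrative choices, not values taken from this card.

   # start an HTTP server for the model (2048-token context, 36 layers on the GPU)
   ./server --model ./cloudyu_Yi-34Bx2-MoE-60B_Q3_K_XS.gguf -ngl 36 -c 2048 --port 8080
   # request a completion from the /completion endpoint
   curl --request POST --url http://localhost:8080/completion \
        --header "Content-Type: application/json" \
        --data '{"prompt": "What is the biggest animal?", "n_predict": 64}'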

Model details

Model size: 60.8B params
Architecture: llama
Provided quantizations: 3-bit and 4-bit GGUF files