How to use this model from Python with `ctransformers`

#5
by anhlht - opened

```python
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "QuantFactory/Meta-Llama-3-8B-Instruct-GGUF",
    model_file="Meta-Llama-3-8B-Instruct.Q8_0.gguf",
    model_type="llama",
    gpu_layers=50)
```

I tried this code but got an error.

You should use llama.cpp instead (e.g. via the `llama-cpp-python` bindings). `ctransformers` is no longer actively maintained and does not handle newer GGUF models such as Llama 3.
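As a sketch, the same model can be loaded with `llama-cpp-python` (`pip install llama-cpp-python`); `Llama.from_pretrained` downloads the GGUF file from the Hub (it requires `huggingface-hub` to be installed). The parameter values below mirror the ones from the question and may need tuning for your hardware:

```python
from llama_cpp import Llama

# Download the quantized GGUF from the Hub and load it
llm = Llama.from_pretrained(
    repo_id="QuantFactory/Meta-Llama-3-8B-Instruct-GGUF",
    filename="Meta-Llama-3-8B-Instruct.Q8_0.gguf",
    n_gpu_layers=50,   # offload layers to GPU, as in the original snippet
)

# Llama 3 Instruct is a chat model, so use the chat-completion API
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, who are you?"}],
)
print(out["choices"][0]["message"]["content"])
```

Note that `n_gpu_layers` (not `gpu_layers`) is the llama-cpp-python spelling, and no `model_type` argument is needed since the architecture is read from the GGUF metadata.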
