NeMo

GGUF

#5
by iHaag - opened

Can you run this with llama.cpp?

NVIDIA org

Probably not at this time -- I did a quick search and it doesn't seem that llama.cpp supports NeMo models.


Yes, you can, at least with my branch. Check this out for details: https://github.com/ggerganov/llama.cpp/issues/7966#issuecomment-2227104693
