Error loading model in llama.cpp?

#1
by ubergarm - opened

Was this quantized with iamlemec's 6515e787 commit?

It just failed running on llama.cpp 69c487 as well as on the commit above.

llama_model_load: error loading model: check_tensor_dims: tensor 'blk.0.attn_q.weight' has wrong shape; expected  5120,  5120, got  5120,  4096,     1,     1
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model '../models/QuantFactory/Mistral-Nemo-Instruct-2407-GGUF/Mistral-Nemo-Instruct-2407.Q8_0.gguf'
$ cd llama.cpp
$ git pull
$ git remote add iamlemec git@github.com:iamlemec/llama.cpp.git
$ git fetch iamlemec    # fetch the fork first so the cherry-pick can resolve the commit
$ git cherry-pick 6515e787d10095d439228f2
$ git log --pretty=oneline | head -n 5
7c9f8d3c3775c38cb014285752ea88319d5275f8 mistral nemo inference support
69c487f4ed57bb4d4514a1b7ff12608d5a8e7ef0 CUDA: MMQ code deduplication + iquant support (#8495)
07283b1a90e1320aae4762c7e03c879043910252 gguf : handle null name during init (#8587)
940362224d20e35f13aa5fd34a0d937ae57bdf7d llama : add support for Tekken pre-tokenizer (#8579)
69b9945b44c3057ec17cb556994cd36060455d44 llama.swiftui: fix end of generation bug (#8268)
$ make clean && time GGML_CUDA=1 make -j$(nproc)
<error from above>
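
In case it helps anyone else debugging, you can inspect the shapes that actually got written into the GGUF with the dump script that ships under gguf-py/scripts in the llama.cpp tree (the exact filename has shifted between gguf-dump.py and gguf_dump.py depending on the checkout; the grep just picks out the attention tensors):

$ python3 gguf-py/scripts/gguf-dump.py \
    ../models/QuantFactory/Mistral-Nemo-Instruct-2407-GGUF/Mistral-Nemo-Instruct-2407.Q8_0.gguf | grep attn_q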

I suppose I could just wait a bit for everything to catch up, hah.. Thanks for any tips.

I did get this one working in llama.cpp: CompendiumLabs/mistral-nemo-instruct-2407-gguf

Quant Factory org

@ubergarm I converted these using iamlemec's fork; can you confirm whether they work for you?
QuantFactory/Mistral-Nemo-Instruct-2407-GGUF-iamlemec

@munish0838 Huh, I downloaded the new q8_0 and then noticed it has the same sha256sum as the existing one in this repo...

Mistral-Nemo-Instruct-2407-GGUF
Mistral-Nemo-Instruct-2407-GGUF-iamlemec
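
For the record, the check was just sha256sum over one file from each repo downloaded side by side; the local paths and filenames here are mine, so adjust as needed:

$ sha256sum \
    Mistral-Nemo-Instruct-2407-GGUF/Mistral-Nemo-Instruct-2407.Q8_0.gguf \
    Mistral-Nemo-Instruct-2407-GGUF-iamlemec/Mistral-Nemo-Instruct-2407.Q8_0.gguf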

The same is true for a couple of other quants I checked. The sha256sum of your q5_0 also does not match that of iamlemec's CompendiumLabs one (assuming quantization is reproducibly deterministic?). Huh, MaziyarPanahi hasn't uploaded yet either; I saw him mention a bug in one of the GitHub issue threads... So I'm not sure how iamlemec did it, if it wasn't with that fork. Hrmm... I'll have to look closer at the diff; maybe a new argument is needed?

In the meantime, I'll try your q5_0 and see if by some random chance that one works... It's a puzzle to me!

No dice... The q5_0 does not load either; same error as above.

Did a little more research; it seems iamlemec's PR is not done yet, so it's still missing a piece to handle the different-sized head_dim, I guess?
https://huggingface.co/MaziyarPanahi/Mistral-Nemo-Instruct-2407-GGUF/discussions/1#669c29f5aa500cd99d7259e4
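
The numbers line up with that theory: this model's config.json sets hidden_size 5120 and 32 attention heads but an explicit head_dim of 128, so the Q projection is 32 x 128 = 4096 wide, while llama.cpp was deriving the head size as 5120 / 32 = 160 and expecting a square 5120 x 5120 attn_q. Quick sanity check against the original HF config (assuming it's downloaded locally):

$ python3 -c "import json; c = json.load(open('config.json')); print(c['head_dim'], c['hidden_size'] // c['num_attention_heads'])"
128 160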

Quant Factory org

Thanks @ubergarm, I will update them as soon as there is a complete fix.

Quant Factory org
edited Jul 22

PR https://github.com/ggerganov/llama.cpp/pull/8604 is merged, updating quants
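
(For anyone curious, "updating" here should just mean re-running conversion and quantization on current master; the script names and paths below are approximate:)

$ python3 convert_hf_to_gguf.py ../Mistral-Nemo-Instruct-2407 --outtype f16 \
    --outfile Mistral-Nemo-Instruct-2407.f16.gguf
$ ./llama-quantize Mistral-Nemo-Instruct-2407.f16.gguf Mistral-Nemo-Instruct-2407.Q8_0.gguf Q8_0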

Beautiful. Pulled the latest llama.cpp, ran make clean and make, and then downloaded the quants from here. Amazing coherence! Thanks a million.
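
For anyone following along, the exact steps were roughly this (binary name as of current master; the prompt is an arbitrary smoke test):

$ cd llama.cpp && git pull
$ make clean && GGML_CUDA=1 make -j$(nproc)
$ ./llama-cli -m ../models/QuantFactory/Mistral-Nemo-Instruct-2407-GGUF/Mistral-Nemo-Instruct-2407.Q8_0.gguf \
    -p "Hello" -n 64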

aashish1904 changed discussion status to closed

The new Q8_0 works like a charm! Thanks and great job!
