Failed eval

#125
by KnutJaegersberg - opened

Hi, I see I'm not the only one. I've submitted two models in the recent past, and both failed. One of them:

https://huggingface.co/datasets/open-llm-leaderboard/requests/commit/96f20d6390d092336c366127570e79955955355f

Oh, the problem in this case was on my side, sorry... a file seems to be missing. It's just that another submission didn't work either.

KnutJaegersberg changed discussion status to closed

Hmm... the other one should still be interesting. I suspect it's because I used a modified Llama 3 model.

https://huggingface.co/datasets/open-llm-leaderboard/requests/commit/73a39439d2a74acceec14d904ac7ef0f5f2840c3

I didn't use the regular Llama-3-8B, but this one:

https://huggingface.co/imone/Llama-3-8B-fixed-special-embedding

I wouldn't think this has an impact; it's just the only thing I know of that's different.

KnutJaegersberg changed discussion status to open

Hi @KnutJaegersberg ,

I've resubmitted your model. Please open a new discussion here if you need help with other models.

alozowski changed discussion status to closed

I think I've now had like 4 models in a row where the eval didn't work, with different architectures. It's weird.

https://huggingface.co/datasets/open-llm-leaderboard/requests/commit/de66a17b0866fcefa8d8d4c770f45e7ad87c938c

Hi @KnutJaegersberg ,

Could you please send a list of the request files for all of your models that failed? I'll check the logs and resubmit them.
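In case it helps gather that list: a minimal sketch for filtering request files belonging to one user. The sample file names below are made up for illustration; in practice the listing could come from something like `huggingface_hub.HfApi().list_repo_files("open-llm-leaderboard/requests", repo_type="dataset")`, which is an assumption about how the requests dataset is laid out.

```python
def find_user_requests(files, username):
    """Return request JSON files under `username`'s folder in the requests repo."""
    prefix = username + "/"
    return [f for f in files if f.startswith(prefix) and f.endswith(".json")]

# Hypothetical sample listing -- real file names may differ.
sample = [
    "KnutJaegersberg/model-a_eval_request_False_bfloat16_Original.json",
    "someone-else/model-b_eval_request_False_float16_Original.json",
    "KnutJaegersberg/model-c_eval_request_False_bfloat16_Original.json",
]
print(find_user_requests(sample, "KnutJaegersberg"))
```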

alozowski changed discussion status to open
