Aw hell nah

#3
by ChuckMcSneed - opened

I can test it in q4 or q3, but why would I? The self-merge optimum is around original_model*1.7, and this exceeds it by a lot. And I don't even like Llama 3 or 3.1; their performance on my use cases is bad.
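As a rough illustration of that ~1.7x heuristic, here's a minimal sketch of the arithmetic; the base and merged sizes below are placeholder assumptions for illustration, not BigLlama's actual config:

```python
# Rough arithmetic for the ~1.7x self-merge heuristic mentioned above.
# Both sizes are assumed values for illustration only.

def self_merge_optimum(base_params_b: float, factor: float = 1.7) -> float:
    """Heuristic ceiling on useful self-merge size, in billions of parameters."""
    return base_params_b * factor

base_b = 405.0     # assumed base model size (e.g. a 405B model)
merged_b = 1000.0  # assumed self-merge size (~1T parameters)

optimum_b = self_merge_optimum(base_b)
print(f"Heuristic optimum: ~{optimum_b:.0f}B")
print(f"Merged model: {merged_b:.0f}B -> exceeds the heuristic by ~{merged_b - optimum_b:.0f}B")
```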

Stop fat-shaming BigLlama
