Poor quality output compared to Meta's Instruction Tune

#7
by ElliottDyson - opened

Dear cognitivecomputations, I love the work you do here, but I (and other users I have seen post about this on Reddit) have noticed a significant degradation in instruction following, as well as in other areas such as the model's ability to generate very human-like text.

I was wondering whether an earlier training checkpoint might mitigate this while still keeping the benefits you are aiming for, or whether it was the original instruction tuning that made the responses so good. If it is the latter, then perhaps it would be worth fine-tuning on a converted dataset that uses the Llama-3 instruction prompt format, possibly starting from that earlier checkpoint as well?
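
To be concrete about what I mean by a converted dataset, here is a rough sketch of mapping role/content turns onto the Llama-3 instruct prompt format (the record layout below is just an assumption on my part, not the actual format of your training data):

```python
# Rough sketch: render a list of role/content turns in the Llama-3 instruct
# prompt format. The input layout is a hypothetical example, not the layout
# of any particular training set.
def to_llama3_prompt(turns):
    parts = ["<|begin_of_text|>"]
    for turn in turns:
        parts.append(
            f"<|start_header_id|>{turn['role']}<|end_header_id|>\n\n"
            f"{turn['content']}<|eot_id|>"
        )
    return "".join(parts)

example = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarise this paragraph."},
    {"role": "assistant", "content": "Sure, here is a summary..."},
]
print(to_llama3_prompt(example))
```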

Thank you.

It's worth noting that this is a fine-tune of the base version of Llama-3, not the instruction-tuned version, which is what it sounds like you are comparing it to. So the "original" in this case had no instruction-following ability at all. Because of that, it would also make little difference which chat template it is trained with, since the base version has no native chat template to begin with.
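
To illustrate that point, here is a minimal sketch assuming the transformers library and access to the gated meta-llama repositories (the model IDs are only for illustration):

```python
from transformers import AutoTokenizer

# Illustrative model IDs: the base release vs. Meta's instruction tune.
base = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
instruct = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# The instruct tokenizer ships a Jinja chat template; the base tokenizer
# typically does not, which is the point above.
print(instruct.chat_template is not None)  # True
print(base.chat_template)                  # typically None

messages = [{"role": "user", "content": "Hello!"}]
# Renders the Llama-3 instruct prompt format for the instruct tokenizer.
print(instruct.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```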

Understood. That would explain the poorer performance on chat-related tasks compared to the instruction tune. I would love to see this done on the instruct model! 😁

Thank you.

ElliottDyson changed discussion title from Poor quality output compared to original to Poor quality output compared to Meta's Instruction Tune

Having realised from the following Reddit comment that this is not the purpose of this model, I have decided to close this thread: https://www.reddit.com/r/LocalLLaMA/s/0FLP3MjLI0

Thank you 😊

ElliottDyson changed discussion status to closed