GGUF variants of UltraCM-13b

by alvarobartt - opened

Hi there! Great job on UltraFeedback and friends 👍🏻

I decided to quantize the weights to GGUF so that more users can run the model locally with llama.cpp or their preferred LLM engine; check the files out at https://huggingface.co/alvarobartt/UltraCM-13B-GGUF

I saw little to no performance degradation on structured output generation. That said, I got better results with both UltraCM-13b and the GGUF quantized variants by moving the `### Feedback\nOverall Score:` lines from the end of the user prompt to the start of the assistant turn, so that the assistant's completion begins with `### Feedback\nOverall Score:`; otherwise the outputs were not really consistent at either precision.
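A minimal sketch of that prompt tweak, assuming the user prompt ends with the `### Feedback\nOverall Score:` cue (the helper name and the placeholder instruction text here are my own, not part of UltraCM):

```python
# Cue that UltraCM expects to precede its score; the idea is to strip it
# from the end of the user turn and use it to seed the assistant turn instead.
FEEDBACK_CUE = "### Feedback\nOverall Score: "

def split_feedback_cue(user_prompt: str) -> tuple[str, str]:
    """Return (trimmed user prompt, assistant prefix) so that generation
    starts right after 'Overall Score: ' instead of ending the user turn there."""
    if user_prompt.endswith(FEEDBACK_CUE):
        user_prompt = user_prompt[: -len(FEEDBACK_CUE)]
    return user_prompt.rstrip(), FEEDBACK_CUE

# Placeholder instruction text, for illustration only:
user, assistant_prefix = split_feedback_cue(
    "Evaluate the answer below.\n### Feedback\nOverall Score: "
)
```

You would then pass `user` as the user message and start the assistant completion from `assistant_prefix`, letting the model fill in the score and feedback.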

Feel free to try those out!
