---
library_name: transformers
tags: []
---

# gemma-2-9b-it-SimPO Model Card

SimPO (Simple Preference Optimization) is an offline preference optimization algorithm designed to enhance the training of large language models (LLMs) with preference optimization datasets. SimPO aligns the reward function with the generation likelihood, eliminating the need for a reference model and incorporating a target reward margin to boost performance. Please refer to our [preprint](https://arxiv.org/pdf/2405.14734) and [GitHub repo](https://github.com/princeton-nlp/SimPO) for more details.

## Model Details

### Model Description

We fine-tuned [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) with the SimPO objective on a preference optimization dataset whose prompts come from [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).

- **Developed by:** Yu Meng, Mengzhou Xia, Danqi Chen
- **Model type:** Causal Language Model
- **License:** gemma
- **Finetuned from model:** [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it)

### Model Sources

- **Repository:** https://github.com/princeton-nlp/SimPO
- **Paper:** https://arxiv.org/pdf/2405.14734
- **Demo:** Coming soon

## How to Get Started with the Model

```python
import torch
from transformers import pipeline

model_id = "princeton-nlp/gemma-2-9b-it-SimPO"

# Load the model as a chat text-generation pipeline in bfloat16 on GPU
generator = pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)

# The tokenizer ships with a chat template, so chat messages can be passed directly
outputs = generator(
    [{"role": "user", "content": "What's the difference between llamas and alpacas?"}],
    do_sample=False,
    max_new_tokens=200,
)

# The last message in the returned conversation is the assistant's reply
print(outputs[0]["generated_text"][-1]["content"])
```

## Training Details

### Training Data

We use a preference optimization dataset whose prompts come from [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).

#### Training Hyperparameters

- **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times

Fine-tuning [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) with SimPO takes around 100 minutes to finish on 8xH100 GPUs.

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Technical Specifications

### Model Architecture and Objective

The model architecture is based on [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it). We use the SimPO training objective proposed in our [preprint](https://arxiv.org/pdf/2405.14734); the objective is reproduced below for reference.

#### Hardware

We used 8xH100 GPUs for model training.

#### Software

Training was done using the [alignment-handbook](https://github.com/huggingface/alignment-handbook) library.
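For reference, the SimPO objective described in the [preprint](https://arxiv.org/pdf/2405.14734) uses the length-normalized log-likelihood of a response as an implicit reward and optimizes a margin loss without a reference model:

$$
\mathcal{L}_{\text{SimPO}}(\pi_\theta) = -\,\mathbb{E}_{(x,\, y_w,\, y_l)\sim\mathcal{D}}\left[\log\sigma\!\left(\frac{\beta}{|y_w|}\log\pi_\theta(y_w\mid x) - \frac{\beta}{|y_l|}\log\pi_\theta(y_l\mid x) - \gamma\right)\right]
$$

where $x$ is the prompt, $y_w$ and $y_l$ are the chosen and rejected responses, $\sigma$ is the sigmoid function, $\beta$ scales the length-normalized implicit reward, and $\gamma$ is the target reward margin.

A minimal, illustrative PyTorch sketch of this loss is given below. This is not the actual training code (which lives in the SimPO and alignment-handbook repositories), and the function name and hyperparameter values are placeholders:

```python
import torch
import torch.nn.functional as F

def simpo_loss(
    chosen_logps_sum: torch.Tensor,    # summed log-probs of chosen responses, shape (batch,)
    rejected_logps_sum: torch.Tensor,  # summed log-probs of rejected responses, shape (batch,)
    chosen_lengths: torch.Tensor,      # token counts of chosen responses, shape (batch,)
    rejected_lengths: torch.Tensor,    # token counts of rejected responses, shape (batch,)
    beta: float = 10.0,                # reward scale (illustrative value)
    gamma: float = 5.0,                # target reward margin (illustrative value)
) -> torch.Tensor:
    # Length-normalized, reference-free implicit rewards
    chosen_reward = beta * chosen_logps_sum / chosen_lengths
    rejected_reward = beta * rejected_logps_sum / rejected_lengths
    # Bradley-Terry-style margin loss over the reward difference
    return -F.logsigmoid(chosen_reward - rejected_reward - gamma).mean()
```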
## Citation

```bibtex
@article{gemma_2024,
  title={Gemma},
  url={https://www.kaggle.com/m/3301},
  doi={10.34740/KAGGLE/M/3301},
  publisher={Kaggle},
  author={Gemma Team},
  year={2024}
}

@article{meng2024simpo,
  title={{SimPO}: Simple preference optimization with a reference-free reward},
  author={Meng, Yu and Xia, Mengzhou and Chen, Danqi},
  journal={arXiv preprint arXiv:2405.14734},
  year={2024}
}

@article{cui2023ultrafeedback,
  title={{UltraFeedback}: Boosting language models with high-quality feedback},
  author={Cui, Ganqu and Yuan, Lifan and Ding, Ning and Yao, Guanming and Zhu, Wei and Ni, Yuan and Xie, Guotong and Liu, Zhiyuan and Sun, Maosong},
  journal={arXiv preprint arXiv:2310.01377},
  year={2023}
}

@article{wang2024interpretable,
  title={Interpretable Preferences via Multi-Objective Reward Modeling and Mixture-of-Experts},
  author={Wang, Haoxiang and Xiong, Wei and Xie, Tengyang and Zhao, Han and Zhang, Tong},
  journal={arXiv preprint arXiv:2406.12845},
  year={2024}
}
```