princeton-nlp committed on
Commit
d64181d
1 Parent(s): dfefcd9

Update README.md

Files changed (1)
  1. README.md +107 -33
README.md CHANGED
@@ -10,52 +10,126 @@ model-index:
  results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](None)
- # gemma2-9b-simpo-beta-10-ratio-0.4-lr-8e-7

- This model is a fine-tuned version of [/scratch/gpfs/DANQIC/ym0081/hf_cache/gemma-2-9b-it](https://huggingface.co//scratch/gpfs/DANQIC/ym0081/hf_cache/gemma-2-9b-it) on the /scratch/gpfs/DANQIC/ym0081/hf_cache/gemma2-ultrafeedback-armorm/dataset_dict/ dataset.

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 8e-07
- - train_batch_size: 2
- - eval_batch_size: 4
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 8
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 128
- - total_eval_batch_size: 32
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 1

- ### Training results

- ### Framework versions

- - Transformers 4.42.4
- - Pytorch 2.2.2+cu121
- - Datasets 2.18.0
- - Tokenizers 0.19.1
+ # gemma-2-9b-it-SimPO Model Card
+
+ SimPO (Simple Preference Optimization) is an offline preference optimization algorithm designed to enhance the training of large language models (LLMs) with preference optimization datasets. SimPO aligns the reward function with the generation likelihood, eliminating the need for a reference model and incorporating a target reward margin to boost performance. Please refer to our [preprint](https://arxiv.org/pdf/2405.14734) and [GitHub repo](https://github.com/princeton-nlp/SimPO) for more details.
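+
+ In SimPO, the reward for a response is its length-normalized log-likelihood under the policy, and the loss encourages the chosen response's reward to exceed the rejected one's by a target margin. As a sketch, the objective as formulated in the preprint (y_w / y_l are the chosen / rejected responses, β the reward scaling, γ the target reward margin):
+
+ $$
+ \mathcal{L}_{\mathrm{SimPO}}(\pi_\theta) = -\,\mathbb{E}_{(x, y_w, y_l)\sim\mathcal{D}}\left[\log\sigma\!\left(\frac{\beta}{|y_w|}\log\pi_\theta(y_w\mid x) - \frac{\beta}{|y_l|}\log\pi_\theta(y_l\mid x) - \gamma\right)\right]
+ $$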
 
16
 
 
17
 
18
+ ## Model Details
19
 
20
+ ### Model Description
21
 
22
+ We fine-tuned [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) on [princeton-nlp/gemma2-ultrafeedback-armorm](https://huggingface.co/datasets/princeton-nlp/gemma2-ultrafeedback-armorm) with the SimPO objective.
23
 
24
+ - **Developed by:** Yu Meng, Mengzhou Xia, Danqi Chen
25
+ - **Model type:** Causal Language Model
26
+ - **License:** gemma
27
+ - **Finetuned from model:** [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it)
28
 
29
+ ### Model Sources
30
 
31
+ <!-- Provide the basic links for the model. -->
32
 
33
+ - **Repository:** https://github.com/princeton-nlp/SimPO
34
+ - **Paper:** https://arxiv.org/pdf/2405.14734
35
+ - **Demo:** Soon to be alive
+
+ ## How to Get Started with the Model
+ ```python
+ import torch
+ from transformers import pipeline
+
+ model_id = "princeton-nlp/gemma-2-9b-it-SimPO"
+
+ # Load the model in bfloat16 on a CUDA device.
+ generator = pipeline(
+     "text-generation",
+     model=model_id,
+     model_kwargs={"torch_dtype": torch.bfloat16},
+     device="cuda",
+ )
+ # Pass a chat-formatted prompt; the pipeline applies the chat template automatically.
+ outputs = generator([{"role": "user", "content": "What's the difference between llamas and alpacas?"}], do_sample=False, max_new_tokens=200)
+ print(outputs[0]['generated_text'])
+ ```
+
+ ## Training Details
+
+ ### Training Data
+
+ We use [princeton-nlp/gemma2-ultrafeedback-armorm](https://huggingface.co/datasets/princeton-nlp/gemma2-ultrafeedback-armorm) as the preference optimization dataset.
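+
+ A quick way to inspect the data (a minimal sketch; it only assumes the `datasets` library and the public dataset ID above):
+
+ ```python
+ from datasets import load_dataset
+
+ # Download the preference dataset and look at its structure.
+ ds = load_dataset("princeton-nlp/gemma2-ultrafeedback-armorm")
+ for split, data in ds.items():
+     # Column names typically include the prompt and the chosen/rejected responses.
+     print(split, len(data), data.column_names)
+ ```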
+
+ #### Training Hyperparameters
+
+ [TO BE FILLED LATER]
+
+ #### Speeds, Sizes, Times
+
+ Fine-tuning [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) on [princeton-nlp/gemma2-ultrafeedback-armorm](https://huggingface.co/datasets/princeton-nlp/gemma2-ultrafeedback-armorm) takes around 100 minutes on 8xH100 GPUs.
+
+ ## Evaluation Results
+
+ AE2 = AlpacaEval 2 (LC = length-controlled win rate, WR = raw win rate), AH = Arena-Hard, GSM = GSM8K; the Length columns report average response length.
+
+ | models | AE2 LC | AE2 WR | AE2 Length | AH | AH Length | GSM | GSM Length | MMLU | MMLU Length |
+ |-----------------------------------|:------:|:------:|:----------:|:----:|:---------:|:----:|:----------:|:----:|:-----------:|
+ | [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) | 51.1 | 38.1 | 1571 | 40.8 | 545 | 87.4 | 395 | 72.7 | 515 |
+ | [princeton-nlp/gemma-2-9b-it-DPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-DPO) | 67.8 | 65.4 | 2016 | 58.9 | 717 | 88.5 | 392 | 72.2 | 624 |
+ | [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO) | 72.4 | 65.9 | 1833 | 59.1 | 693 | 88.0 | 341 | 72.2 | 441 |
+
+ ## Technical Specifications
+
+ ### Model Architecture and Objective
+
+ The model architecture is based on [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it). We use the SimPO training objective proposed in our [preprint](https://arxiv.org/pdf/2405.14734).
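+
+ For illustration only (not the training code used for this model; see the SimPO repository for that), a minimal sketch of the SimPO loss given summed log-probabilities and token counts for chosen and rejected responses. The default β = 10 and γ/β = 0.4 below mirror the run name in the replaced auto-generated card; the hyperparameters actually used are to be filled in above.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def simpo_loss(chosen_logps_sum, rejected_logps_sum,
+                chosen_lengths, rejected_lengths,
+                beta=10.0, gamma_over_beta=0.4):
+     """Reference-free SimPO loss from length-normalized log-likelihood rewards (sketch)."""
+     # Length-normalized rewards, scaled by beta.
+     chosen_reward = beta * chosen_logps_sum / chosen_lengths
+     rejected_reward = beta * rejected_logps_sum / rejected_lengths
+     # Target reward margin gamma, parameterized as gamma / beta.
+     gamma = gamma_over_beta * beta
+     # Negative log-sigmoid of the chosen-minus-rejected margin.
+     return -F.logsigmoid(chosen_reward - rejected_reward - gamma).mean()
+
+ # Toy example with dummy numbers (not real model outputs).
+ loss = simpo_loss(torch.tensor([-120.0]), torch.tensor([-180.0]),
+                   torch.tensor([100.0]), torch.tensor([120.0]))
+ print(loss)
+ ```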
+
+ #### Hardware
+
+ We used 8xH100 GPUs for model training.
+
+ #### Software
+
+ Training was done using the [alignment-handbook](https://github.com/huggingface/alignment-handbook) library.
+
+ ## Citation
+
+ gemma model:
+ ```bibtex
+ @article{gemma_2024,
+     title={Gemma},
+     url={https://www.kaggle.com/m/3301},
+     DOI={10.34740/KAGGLE/M/3301},
+     publisher={Kaggle},
+     author={Gemma Team},
+     year={2024}
+ }
+ ```
+
+ SimPO paper:
+ ```bibtex
+ @article{meng2024simpo,
+     title={{SimPO}: Simple preference optimization with a reference-free reward},
+     author={Meng, Yu and Xia, Mengzhou and Chen, Danqi},
+     journal={arXiv preprint arXiv:2405.14734},
+     year={2024}
+ }
+ ```
+
+ UltraFeedback paper:
+ ```bibtex
+ @article{cui2023ultrafeedback,
+     title={{UltraFeedback}: Boosting language models with high-quality feedback},
+     author={Cui, Ganqu and Yuan, Lifan and Ding, Ning and Yao, Guanming and Zhu, Wei and Ni, Yuan and Xie, Guotong and Liu, Zhiyuan and Sun, Maosong},
+     journal={arXiv preprint arXiv:2310.01377},
+     year={2023}
+ }
+ ```
+
+ ArmoRM paper:
+ ```bibtex
+ @article{wang2024interpretable,
+     title={Interpretable Preferences via Multi-Objective Reward Modeling and Mixture-of-Experts},
+     author={Wang, Haoxiang and Xiong, Wei and Xie, Tengyang and Zhao, Han and Zhang, Tong},
+     journal={arXiv preprint arXiv:2406.12845},
+     year={2024}
+ }
+ ```