---
license: gemma
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
base_model: tanliboy/zephyr-gemma-2-9b-sft
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: zephyr-gemma-2-9b-dpo-2
  results: []
---

[Visualize in Weights & Biases](https://wandb.ai/tanliboy/huggingface/runs/dikk0994)

# zephyr-gemma-2-9b-dpo-2

This model is a fine-tuned version of [tanliboy/zephyr-gemma-2-9b-sft](https://huggingface.co/tanliboy/zephyr-gemma-2-9b-sft) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5628
- Rewards/chosen: -0.7292
- Rewards/rejected: -1.2825
- Rewards/accuracies: 0.6960
- Rewards/margins: 0.5533
- Logps/rejected: -1566.9301
- Logps/chosen: -1043.5624
- Logits/rejected: -14.1720
- Logits/chosen: -14.6638

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 1

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6835        | 0.2094 | 50   | 0.6815          | -0.0218        | -0.0436          | 0.6560             | 0.0218          | -328.0053      | -336.0947    | -11.6381        | -11.3403      |
| 0.6243        | 0.4187 | 100  | 0.6229          | -0.5238        | -0.7528          | 0.6600             | 0.2290          | -1037.2136     | -838.1255    | -15.5098        | -15.6787      |
| 0.5625        | 0.6281 | 150  | 0.5793          | -0.7186        | -1.1873          | 0.6880             | 0.4688          | -1471.7362     | -1032.8834   | -14.7746        | -15.1797      |
| 0.5699        | 0.8375 | 200  | 0.5647          | -0.6443        | -1.1499          | 0.6920             | 0.5057          | -1434.3335     | -958.5825    | -14.1861        | -14.6684      |

### Framework versions

- Transformers 4.43.1
- Pytorch 2.3.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_tanliboy__lambda-gemma-2-9b-dpo)

|      Metric       |Value|
|-------------------|----:|
|Avg.               |21.34|
|IFEval (0-Shot)    |45.01|
|BBH (3-Shot)       |35.55|
|MATH Lvl 5 (4-Shot)| 0.00|
|GPQA (0-shot)      | 8.50|
|MuSR (0-shot)      | 7.94|
|MMLU-PRO (5-shot)  |31.02|
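A note on reading the Rewards/* columns above: in TRL's DPO trainer these are implicit rewards derived from the policy/reference log-probability ratio, not scores from an external reward model. As a sketch, the standard DPO objective (Rafailov et al., 2023) is

$$
\mathcal{L}_\text{DPO}(\theta) = -\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_\text{ref}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_\text{ref}(y_l \mid x)}\right)
$$

where \\(y_w\\) and \\(y_l\\) are the chosen and rejected completions. Rewards/chosen is the batch mean of \\(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_\text{ref}(y_w \mid x)}\\), Rewards/margins is the mean gap between chosen and rejected rewards, and Rewards/accuracies is the fraction of pairs where the chosen reward exceeds the rejected one. The \\(\beta\\) used for this run is not recorded in this card (TRL defaults to 0.1).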
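For readers who want to see how the hyperparameters above map onto code, below is a minimal reproduction sketch, not the authors' training script. It assumes the alignment-handbook / TRL DPO recipe implied by the tags; the DPO beta and exact TRL version are not recorded in this card, so `beta=0.1` (TRL's default) is an assumption, and recent TRL releases rename `tokenizer=` to `processing_class=`.

```python
# Reproduction sketch under the assumptions stated above.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "tanliboy/zephyr-gemma-2-9b-sft"  # SFT checkpoint this model starts from
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)

# UltraFeedback binarized ships train_prefs/test_prefs splits with
# prompt/chosen/rejected fields.
ds = load_dataset("HuggingFaceH4/ultrafeedback_binarized")

args = DPOConfig(
    output_dir="zephyr-gemma-2-9b-dpo-2",
    learning_rate=5e-7,
    per_device_train_batch_size=2,   # 2 x 8 GPUs x 16 accumulation = 256 effective
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=16,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.2,
    seed=42,
    bf16=True,
    beta=0.1,  # assumption: beta is not recorded in this card
)

trainer = DPOTrainer(
    model=model,            # the reference model is cloned from `model` when not given
    args=args,
    train_dataset=ds["train_prefs"],
    eval_dataset=ds["test_prefs"],
    tokenizer=tokenizer,
)
trainer.train()
```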
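Finally, a minimal inference sketch (not part of the original card), assuming the chat template from the SFT stage is bundled with this repo's tokenizer:

```python
# Generation sketch: load the DPO checkpoint and chat with it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tanliboy/zephyr-gemma-2-9b-dpo-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain DPO in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```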