---
library_name: transformers
license: gemma
base_model: google/gemma-7b
tags:
- alignment-handbook
- trl
- orpo
- generated_from_trainer
datasets:
- silviasapora/low_quality_dpo7k
model-index:
- name: gemma-7b-borpo-low-quality
  results: []
---

# gemma-7b-borpo-low-quality

This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the [silviasapora/low_quality_dpo7k](https://huggingface.co/datasets/silviasapora/low_quality_dpo7k) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5380
- Rewards/chosen: -0.0547
- Rewards/rejected: -0.0625
- Rewards/accuracies: 0.5468
- Rewards/margins: 0.0079
- Logps/rejected: -1.2508
- Logps/chosen: -1.0933
- Logits/rejected: 267.2346
- Logits/chosen: 296.6808
- Nll Loss: 1.4703
- Log Odds Ratio: -0.7039
- Log Odds Chosen: 0.2721

## Model description

A preference-tuned variant of Gemma-7B, trained with ORPO (odds ratio preference optimization) via TRL on the silviasapora/low_quality_dpo7k preference dataset.

## Intended uses & limitations

More information needed

## Training and evaluation data

The model was trained and evaluated on the [silviasapora/low_quality_dpo7k](https://huggingface.co/datasets/silviasapora/low_quality_dpo7k) preference dataset.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_steps: 100
- num_epochs: 3

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Nll Loss | Log Odds Ratio | Log Odds Chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:--------------:|:---------------:|
| 1.436         | 0.9955 | 167  | 1.4639          | -0.0502        | -0.0571          | 0.5540             | 0.0068          | -1.1413        | -1.0048      | 294.2689        | 322.9157      | 1.4152   | -0.6882        | 0.2192          |
| 1.0918        | 1.9970 | 335  | 1.4233          | -0.0501        | -0.0574          | 0.4964             | 0.0073          | -1.1475        | -1.0012      | 284.8744        | 313.3100      | 1.3661   | -0.7028        | 0.2209          |
| 0.576         | 2.9866 | 501  | 1.5380          | -0.0547        | -0.0625          | 0.5468             | 0.0079          | -1.2508        | -1.0933      | 267.2346        | 296.6808      | 1.4703   | -0.7039        | 0.2721          |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
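
## How to use

The snippet below is a minimal inference sketch using the standard `transformers` API. The repository id `silviasapora/gemma-7b-borpo-low-quality` is inferred from this card's model name and is an assumption; adjust it to wherever the weights are actually hosted.

```python
# Minimal inference sketch; the repository id is an assumption based on this
# card's model name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "silviasapora/gemma-7b-borpo-low-quality"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Gemma-7B needs roughly 16 GB of memory in bf16
    device_map="auto",
)

inputs = tokenizer("Explain ORPO in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```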
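
## Example training setup (sketch)

The exact training script is not part of this card. As a hedged sketch, the hyperparameters above map onto TRL's `ORPOConfig`/`ORPOTrainer` roughly as follows. The dataset split names and the `prompt`/`chosen`/`rejected` column layout are assumptions, and in recent TRL releases the `tokenizer` argument has been renamed `processing_class`; the per-device batch sizes reflect the 4-GPU, 4-step gradient-accumulation setup listed above.

```python
# Hedged sketch of the training configuration implied by the hyperparameters
# above; not the authors' actual script.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")

# Assumed to expose the "prompt"/"chosen"/"rejected" columns ORPOTrainer expects.
dataset = load_dataset("silviasapora/low_quality_dpo7k")

args = ORPOConfig(
    output_dir="gemma-7b-borpo-low-quality",
    learning_rate=5e-6,
    per_device_train_batch_size=2,   # x 4 GPUs x 4 accumulation steps = 32 total
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,
    lr_scheduler_type="inverse_sqrt",
    warmup_steps=100,
    num_train_epochs=3,
    seed=42,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],  # split name is an assumption
    tokenizer=tokenizer,           # `processing_class` in newer TRL releases
)
trainer.train()
```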