
easylm-rm-gemma-2-9b

This model is a fine-tuned version of scottsuk0306/easylm-sft-gemma-2-9b on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6317
  • Accuracy: 0.6771
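
For reference, a minimal usage sketch is shown below. It assumes the checkpoint loads as a standard transformers sequence-classification (reward) head that returns a single scalar score; the card does not document the exact head, so treat this as an illustration rather than the canonical API.

```python
# Minimal sketch: scoring a prompt/response pair with the reward model.
# Assumption (not confirmed by this card): the checkpoint exposes a
# sequence-classification head whose single logit is used as the reward.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "scottsuk0306/easylm-rm-gemma-2-9b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)
model.eval()

prompt = "What is the capital of France?"
response = "The capital of France is Paris."
inputs = tokenizer(prompt + "\n" + response, return_tensors="pt").to(model.device)

with torch.no_grad():
    reward = model(**inputs).logits[0].item()  # higher = preferred
print(f"reward: {reward:.4f}")
```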

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-06
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • total_eval_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • num_epochs: 1
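
As a hedged illustration, these settings map onto a transformers TrainingArguments configuration roughly as follows. The actual EasyLM training script is not published with this card, and output_dir is a placeholder.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters as a transformers TrainingArguments
# config. Adam betas=(0.9, 0.999) and epsilon=1e-08 are the transformers
# defaults, so they need no explicit arguments here.
args = TrainingArguments(
    output_dir="rm-output",        # placeholder, not from the card
    learning_rate=1e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    num_train_epochs=1,
    bf16=True,                     # assumption: BF16 training, matching the stored weights
)
# Effective batch sizes with 8 GPUs:
#   train: 2 per device * 8 devices * 4 accumulation steps = 64
#   eval:  2 per device * 8 devices                         = 16
```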

Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.662         | 0.0667 | 10   | 0.7110          | 0.5833   |
| 0.6752        | 0.1333 | 20   | 0.6839          | 0.5938   |
| 0.682         | 0.2    | 30   | 0.6653          | 0.6875   |
| 0.678         | 0.2667 | 40   | 0.6628          | 0.7188   |
| 0.6591        | 0.3333 | 50   | 0.6403          | 0.6458   |
| 0.6591        | 0.4    | 60   | 0.6619          | 0.7083   |
| 0.6324        | 0.4667 | 70   | 0.6534          | 0.6875   |
| 0.6812        | 0.5333 | 80   | 0.6372          | 0.6667   |
| 0.6562        | 0.6    | 90   | 0.6301          | 0.6667   |
| 0.6534        | 0.6667 | 100  | 0.6283          | 0.7083   |
| 0.6479        | 0.7333 | 110  | 0.6286          | 0.6875   |
| 0.651         | 0.8    | 120  | 0.6281          | 0.6979   |
| 0.6612        | 0.8667 | 130  | 0.6297          | 0.6979   |
| 0.634         | 0.9333 | 140  | 0.6300          | 0.7083   |
| 0.6311        | 1.0    | 150  | 0.6317          | 0.6771   |

Framework versions

  • Transformers 4.43.3
  • Pytorch 2.3.0+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1
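
To reproduce the evaluation numbers, it may help to pin these exact versions; a quick sanity check:

```python
# Verify the local environment matches the versions used for training.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.43.3",
    "torch": "2.3.0+cu121",
    "datasets": "2.20.0",
    "tokenizers": "0.19.1",
}
actual = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    status = "ok" if actual[name] == want else f"mismatch (got {actual[name]})"
    print(f"{name}: {status}")
```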
Model size: 9.24B params · Tensor type: BF16 (Safetensors)

Model tree for scottsuk0306/easylm-rm-gemma-2-9b

Base model: google/gemma-2-9b → scottsuk0306/easylm-sft-gemma-2-9b → this model