---
license: apache-2.0
base_model: facebook/bart-base
tags:
  - generated_from_trainer
datasets:
  - samsum
metrics:
  - rouge
model-index:
  - name: dialogue-samsum
    results:
      - task:
          name: Sequence-to-sequence Language Modeling
          type: text2text-generation
        dataset:
          name: samsum
          type: samsum
          config: samsum
          split: validation
          args: samsum
        metrics:
          - name: Rouge1
            type: rouge
            value: 48.0133
---

# dialogue-samsum

This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the samsum dataset. It achieves the following results on the evaluation set (a sketch for recomputing ROUGE scores follows the list):

- Loss: 0.3249
- Rouge1: 48.0133
- Rouge2: 24.9057
- RougeL: 40.6842
- RougeLsum: 40.6602
- Gen Len: 18.2384
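
The ROUGE values above follow the convention of the 🤗 `evaluate` library's aggregate F-measures scaled by 100, and Gen Len is presumably the average token length of the generated summaries. A minimal sketch for recomputing such scores (the example strings are illustrative placeholders, not taken from the evaluation set):

```python
# A minimal sketch using the `evaluate` library; the strings below are
# illustrative placeholders, not from the SAMSum evaluation set.
import evaluate

rouge = evaluate.load("rouge")

predictions = ["amanda baked cookies and will bring jerry some tomorrow"]
references = ["Amanda baked cookies and will bring Jerry some tomorrow."]

scores = rouge.compute(predictions=predictions, references=references)
# `evaluate` returns fractions in [0, 1]; the card reports them scaled by 100.
print({k: round(v * 100, 4) for k, v in scores.items()})
```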

## Model description

A BART encoder-decoder model (`facebook/bart-base`) fine-tuned for abstractive summarization of short, messenger-style dialogues such as those in the SAMSum corpus.

## Intended uses & limitations

The model is intended for summarizing short chat-style conversations similar to those in SAMSum. Quality is likely to degrade on long conversations, formal documents, or domains far from casual messenger dialogue, and, as with any abstractive summarizer, outputs can contain inaccurate or unsupported statements and should be checked before use.
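
A minimal usage sketch with the 🤗 `transformers` pipeline; the model id below is a placeholder, so substitute the actual Hub path of this checkpoint:

```python
from transformers import pipeline

# Placeholder model id; replace with the actual Hub path of this checkpoint.
summarizer = pipeline("summarization", model="your-username/dialogue-samsum")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you tomorrow :-)"
)

print(summarizer(dialogue, max_length=60, min_length=5)[0]["summary_text"])
```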

## Training and evaluation data

Training and evaluation use the SAMSum corpus, a collection of roughly 16k messenger-like conversations paired with human-written summaries. The metrics above are reported on the validation split.
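
Loading the dataset, sketched with 🤗 `datasets` (SAMSum ships a loading script, so recent `datasets` versions ask for `trust_remote_code=True`):

```python
from datasets import load_dataset

# SAMSum is a script-based dataset, hence trust_remote_code.
ds = load_dataset("samsum", trust_remote_code=True)

print(ds)                   # train / validation / test splits
print(ds["validation"][0])  # {'id': ..., 'dialogue': ..., 'summary': ...}
```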

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `Seq2SeqTrainingArguments` sketch after this list):

- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
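
Expressed as `transformers` training arguments, these settings would look roughly like the following. This is a sketch: `output_dir` and any option not listed above are assumptions, not taken from the original run.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the reported hyperparameters; output_dir is an assumption.
training_args = Seq2SeqTrainingArguments(
    output_dir="dialogue-samsum",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,  # effective train batch size: 4 * 2 = 8
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                      # "Native AMP" mixed precision
    predict_with_generate=True,     # generate summaries so ROUGE can be computed
)
```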

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.3968        | 0.9997 | 1841 | 0.3374          | 47.4452 | 24.2213 | 40.0832 | 40.024    | 18.3875 |
| 0.3432        | 2.0    | 3683 | 0.3270          | 47.721  | 24.8189 | 40.4846 | 40.4736   | 18.143  |
| 0.324         | 2.9992 | 5523 | 0.3249          | 48.0133 | 24.9057 | 40.6842 | 40.6602   | 18.2384 |

### Framework versions

- Transformers 4.41.2
- PyTorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1