---
license: apache-2.0
base_model: t5-3b
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: t5-3b_rte_sp0_ar0
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      config: rte
      split: validation
      args: rte
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8875502008032129
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# t5-3b_rte_sp0_ar0

This model is a fine-tuned version of [t5-3b](https://huggingface.co/t5-3b) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4268
- Accuracy: 0.8876
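
Below is a minimal inference sketch. The checkpoint is tagged for text classification, so the sketch assumes it exposes a sequence-classification head; the repository id and the example sentence pair are illustrative only.

```python
# Minimal inference sketch (assumptions: the checkpoint ships a
# sequence-classification head, and the repo id below is where it is hosted).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "t5-3b_rte_sp0_ar0"  # illustrative; substitute the actual hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# RTE is a sentence-pair task: does the premise entail the hypothesis?
premise = "The cat sat on the mat."
hypothesis = "There is a cat on the mat."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, pred))
```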

## Model description

This checkpoint is [t5-3b](https://huggingface.co/t5-3b) fine-tuned as a text classifier on RTE (Recognizing Textual Entailment), a sentence-pair task from the GLUE benchmark. It reaches 0.8876 accuracy on the RTE validation split (see the model index above).

## Intended uses & limitations

More information needed

## Training and evaluation data

The model was fine-tuned on the RTE configuration of the GLUE dataset; the metrics reported in this card are computed on the RTE validation split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 1
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- training_steps: 750
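
For reference, here is a hedged sketch of `TrainingArguments` that mirrors the values above. The original training script is not part of this card, so treat this as a restatement of the listed values rather than a faithful reproduction; the 25-step evaluation cadence is taken from the results table below.

```python
# Hedged sketch: TrainingArguments mirroring the listed hyperparameters.
# The original training script is unknown; this only restates the values above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="t5-3b_rte_sp0_ar0",
    learning_rate=5e-5,
    per_device_train_batch_size=4,    # 2 GPUs x 4 x grad-accum 2 = effective train batch of 16
    per_device_eval_batch_size=8,     # 2 GPUs x 8 = effective eval batch of 16
    gradient_accumulation_steps=2,
    seed=1,
    lr_scheduler_type="linear",
    warmup_steps=20,
    max_steps=750,
    evaluation_strategy="steps",
    eval_steps=25,                    # matches the eval interval in the results table
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```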

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7306        | 0.18  | 25   | 0.6913          | 0.5921   |
| 0.6717        | 0.36  | 50   | 0.4976          | 0.8339   |
| 0.3978        | 0.53  | 75   | 0.5226          | 0.8628   |
| 0.322         | 0.71  | 100  | 0.3902          | 0.8484   |
| 0.2958        | 0.89  | 125  | 0.3803          | 0.8881   |
| 0.2604        | 1.07  | 150  | 0.8628          | 0.8736   |
| 0.2011        | 1.25  | 175  | 0.7780          | 0.8953   |
| 0.263         | 1.42  | 200  | 2.1533          | 0.8881   |
| 0.2032        | 1.6   | 225  | 4.7955          | 0.8917   |
| 0.2536        | 1.78  | 250  | 1.7810          | 0.8989   |
| 0.1984        | 1.96  | 275  | 0.5119          | 0.8845   |
| 0.1495        | 2.14  | 300  | 0.5128          | 0.8845   |
| 0.1275        | 2.31  | 325  | 0.8602          | 0.8628   |
| 0.0955        | 2.49  | 350  | 1.3642          | 0.8773   |
| 0.3912        | 2.67  | 375  | 1.0186          | 0.8664   |
| 0.1108        | 2.85  | 400  | 2.1450          | 0.8592   |
| 0.0726        | 3.02  | 425  | 2.6801          | 0.8809   |
| 0.0937        | 3.2   | 450  | 5.2053          | 0.8736   |
| 1.0143        | 3.38  | 475  | 3.3979          | 0.8845   |
| 0.5754        | 3.56  | 500  | 4.2786          | 0.8989   |
| 0.2928        | 3.74  | 525  | 5.6543          | 0.8917   |
| 0.5633        | 3.91  | 550  | 6.7064          | 0.8845   |
| 1.0431        | 4.09  | 575  | 4.9205          | 0.8953   |
| 0.2839        | 4.27  | 600  | 4.2344          | 0.8809   |
| 0.5464        | 4.45  | 625  | 4.9598          | 0.8809   |
| 0.0031        | 4.63  | 650  | 5.3705          | 0.8881   |
| 0.5149        | 4.8   | 675  | 4.8105          | 0.8845   |
| 0.2702        | 4.98  | 700  | 6.9958          | 0.8953   |
| 0.7503        | 5.16  | 725  | 5.4360          | 0.8881   |
| 0.2639        | 5.34  | 750  | 5.4420          | 0.8917   |


### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0