Noor0 committed
Commit 7c19a72
Parent: 826e7d1

Update README.md

Files changed (1)
README.md +28 -16
README.md CHANGED
@@ -1,12 +1,15 @@
  ---
  base_model: cardiffnlp/twitter-xlm-roberta-base-sentiment
- tags:
- - generated_from_trainer
  metrics:
  - accuracy
  model-index:
  - name: result
    results: []
+ language:
+ - ar
+ - en
+ library_name: transformers
+ pipeline_tag: text-classification
  ---
  ---

@@ -15,12 +18,14 @@ should probably proofread and complete it, then remove this comment. -->

  # tmp_trainer

- This model was trained from scratch on an unknown dataset.
+ This model is a fine-tuned version of [cardiffnlp/twitter-xlm-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment) on the None dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.502831
+ - Accuracy: 0.798512

  ## Model description

- 0.502831
- 0.798512
+

  ## Intended uses & limitations

@@ -28,27 +33,34 @@ More information needed

  ## Training and evaluation data

- More information needed
+
+ - Training set: 114,885 records
+ - evaluation data: 12,765 records
+

  ## Training procedure

- Epoch Training Loss Validation Loss Accuracy
- 2 0.451100 0.502831 0.798512
- 3 0.365500 0.576118 0.795456
- 4 0.301900 0.625391 0.798512
- 5 0.246600 0.835689 0.797963
+
+
+ | Training Loss | Epoch | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:---------------:|:--------:|
+ | 0.4511        | 2.0   | 0.502831        | 0.7985   |
+ | 0.3655        | 3.0   | 0.576118        | 0.7954   |
+ | 0.3019        | 4.0   | 0.625391        | 0.7985   |
+ | 0.2466        | 5.0   | 0.835689        | 0.7979   |
+
+

  ### Training hyperparameters

  The following hyperparameters were used during training:
- metric_for_best_model='accuracy',
  learning_rate=2e-5,
  num_train_epochs=20,
  weight_decay=0.01,
  per_device_train_batch_size=16, # batch size per device during training
  per_device_eval_batch_size=16, # batch size for evaluation
- evaluation_strategy="epoch",
- save_strategy="epoch",
- save_total_limit = 2,
- load_best_model_at_end=True)
  ### Framework versions
+ - Transformers 4.35.0
+ - Pytorch 2.0.0
+ - Datasets 2.11.0
+ - Tokenizers 0.14.1
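
For readers who want to reproduce the setup, the sketch below shows how the hyperparameters listed under "Training hyperparameters" could be passed to `transformers.TrainingArguments` and `Trainer`. It is an illustration only, not the training script from this commit: the tiny inline datasets and the `compute_metrics` helper are placeholders (the card reports only the split sizes, 114,885 training and 12,765 evaluation records), and the `evaluation_strategy`, `save_strategy`, `save_total_limit`, `load_best_model_at_end`, and `metric_for_best_model` options are carried over from the previous revision of the card, which this commit removes from the README.

```python
# Sketch only: maps the hyperparameters listed above onto TrainingArguments/Trainer.
# The two-sentence datasets and compute_metrics are placeholders for the real (unnamed) data.
import numpy as np
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "cardiffnlp/twitter-xlm-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base)  # 3 sentiment labels

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

# Placeholder splits; the card only reports 114,885 training / 12,765 evaluation records.
train_ds = Dataset.from_dict({"text": ["great service", "awful app"], "label": [2, 0]}).map(tokenize, batched=True)
eval_ds = Dataset.from_dict({"text": ["it was okay"], "label": [1]}).map(tokenize, batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": float((np.argmax(logits, axis=-1) == labels).mean())}

args = TrainingArguments(
    output_dir="tmp_trainer",
    learning_rate=2e-5,
    num_train_epochs=20,
    weight_decay=0.01,
    per_device_train_batch_size=16,  # batch size per device during training
    per_device_eval_batch_size=16,   # batch size for evaluation
    # The options below come from the previous revision of the card
    # (removed from the README in this commit):
    evaluation_strategy="epoch",
    save_strategy="epoch",
    save_total_limit=2,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```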
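
Because the commit adds `library_name: transformers` and `pipeline_tag: text-classification` to the card metadata, the checkpoint should be loadable through the standard text-classification pipeline. A minimal sketch follows; the model id points at the base checkpoint as a stand-in, since the fine-tuned repository id is not stated in this diff, and the two example sentences are arbitrary Arabic and English inputs matching the declared `language` tags.

```python
from transformers import pipeline

# Stand-in model id: replace with the repository id of this fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-xlm-roberta-base-sentiment",
)

# The card declares Arabic and English support.
print(classifier("الخدمة كانت ممتازة"))
print(classifier("The delivery was late and nobody answered my emails."))
```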