Momorami committed
Commit bfc16d2
1 Parent(s): da873a0

Model save

Files changed (1): README.md (+10, -23)
README.md CHANGED
@@ -1,28 +1,13 @@
 ---
-language:
-- en
 license: apache-2.0
 base_model: google/fnet-base
 tags:
 - generated_from_trainer
-datasets:
-- glue
 metrics:
 - matthews_correlation
 model-index:
 - name: fnet-base-finetuned-cola
-  results:
-  - task:
-      name: Text Classification
-      type: text-classification
-    dataset:
-      name: GLUE COLA
-      type: glue
-      args: cola
-    metrics:
-    - name: Matthews Correlation
-      type: matthews_correlation
-      value: 0.3262988329908328
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -30,10 +15,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # fnet-base-finetuned-cola
 
-This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE COLA dataset.
+This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5648
-- Matthews Correlation: 0.3263
+- Loss: 0.6476
+- Matthews Correlation: 0.3934
 
 ## Model description
 
@@ -58,15 +43,17 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 3.0
+- num_epochs: 5.0
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
 |:-------------:|:-----:|:----:|:---------------:|:--------------------:|
-| 0.6101        | 1.0   | 268  | 0.6032          | 0.1346               |
-| 0.5325        | 2.0   | 536  | 0.5451          | 0.3135               |
-| 0.428         | 3.0   | 804  | 0.5648          | 0.3263               |
+| 0.61          | 1.0   | 268  | 0.5818          | 0.1606               |
+| 0.5265        | 2.0   | 536  | 0.5489          | 0.3415               |
+| 0.4161        | 3.0   | 804  | 0.5454          | 0.3451               |
+| 0.3324        | 4.0   | 1072 | 0.5746          | 0.3869               |
+| 0.2657        | 5.0   | 1340 | 0.6476          | 0.3934               |
 
 
 ### Framework versions
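
For reference, the hyperparameters listed in the card map onto `transformers.TrainingArguments` roughly as sketched below. This is not part of the commit: only the values visible in the hunk above (seed, Adam betas and epsilon, scheduler type, number of epochs) come from the card; the output directory, learning rate, and batch size fall outside the changed lines and are placeholders.

```python
# Sketch only: values marked "placeholder" are assumptions, not taken from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="fnet-base-finetuned-cola",  # placeholder output path
    seed=42,                                 # from the card
    adam_beta1=0.9,                          # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                       # ... and epsilon=1e-08
    lr_scheduler_type="linear",              # from the card
    num_train_epochs=5.0,                    # raised from 3.0 in this commit
    evaluation_strategy="epoch",             # the results table shows one eval per epoch
    learning_rate=2e-5,                      # placeholder: not visible in this diff
    per_device_train_batch_size=32,          # placeholder: consistent with 268 steps/epoch
)
```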
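
The headline number in the updated card is a Matthews correlation of 0.3934 on the evaluation set. A minimal way to recompute a score of that kind is sketched below; it assumes the checkpoint is available under a Hub id such as `Momorami/fnet-base-finetuned-cola` (hypothetical) and that the evaluation set is the GLUE CoLA validation split, which the previous card revision and the 268-steps-per-epoch pattern suggest but this revision no longer states.

```python
# Sketch only: the model id and the choice of GLUE CoLA are assumptions (see above).
import torch
from datasets import load_dataset
from sklearn.metrics import matthews_corrcoef
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Momorami/fnet-base-finetuned-cola"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

val = load_dataset("glue", "cola", split="validation")

preds, refs = [], []
with torch.no_grad():
    for start in range(0, len(val), 32):
        batch = val[start : start + 32]  # slicing a Dataset returns a dict of columns
        inputs = tokenizer(
            batch["sentence"], padding=True, truncation=True, return_tensors="pt"
        )
        logits = model(**inputs).logits
        preds.extend(logits.argmax(dim=-1).tolist())
        refs.extend(batch["label"])

print("matthews_correlation:", matthews_corrcoef(refs, preds))  # card reports 0.3934
```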