mwz committed
Commit 950147b
Parent: 09b3ab6

End of training

README.md CHANGED
@@ -4,9 +4,24 @@ tags:
 - generated_from_trainer
 datasets:
 - common_voice_16_0
+metrics:
+- wer
 model-index:
 - name: w2v-bert-2.0-ur
-  results: []
+  results:
+  - task:
+      name: Automatic Speech Recognition
+      type: automatic-speech-recognition
+    dataset:
+      name: common_voice_16_0
+      type: common_voice_16_0
+      config: ur
+      split: test
+      args: ur
+    metrics:
+    - name: Wer
+      type: wer
+      value: 0.2984838198687486
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -15,6 +30,9 @@ should probably proofread and complete it, then remove this comment. -->
 # w2v-bert-2.0-ur
 
 This model is a fine-tuned version of [ylacombe/w2v-bert-2.0](https://huggingface.co/ylacombe/w2v-bert-2.0) on the common_voice_16_0 dataset.
+It achieves the following results on the evaluation set:
+- Loss: inf
+- Wer: 0.2985
 
 ## Model description
 
@@ -42,9 +60,19 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- num_epochs: 3
+- num_epochs: 10
 - mixed_precision_training: Native AMP
 
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss | Wer    |
+|:-------------:|:-----:|:----:|:---------------:|:------:|
+| 0.2789        | 2.4   | 300  | inf             | 0.3200 |
+| 0.2724        | 4.8   | 600  | inf             | 0.3320 |
+| 0.1912        | 7.2   | 900  | inf             | 0.2935 |
+| 0.0931        | 9.6   | 1200 | inf             | 0.2985 |
+
 ### Framework versions
 
 - Transformers 4.37.0.dev0
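The Wer values added to the card are word error rates (lower is better). The Trainer's evaluation loop typically computes this via a library such as `evaluate` or `jiwer`; a minimal self-contained sketch of the underlying metric, assuming whitespace tokenization, is:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return dp[len(ref)][len(hyp)] / len(ref)
```

A WER of 0.2985 means roughly 30 word-level edits per 100 reference words on the Common Voice Urdu test split.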
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:58d541106119fced53b3fa1eecdf33e307b09f99e8a692e16a92118b642bbf5d
+oid sha256:76f8d2e202d964eadbb8747397127f6cbf0d29730ea15be2d761c44637955c96
 size 2423158960
runs/Jan25_09-23-25_42b77e0ec4d9/events.out.tfevents.1706174648.42b77e0ec4d9.739.1 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:75083a2059319249b4161536d93e3996ff3ccda904534890330075091e0c29ee
-size 6564
+oid sha256:c5913080138c6e5cec44b82180daa757de4bd35d561f7130980d85798faa20a3
+size 7868
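The two files above are stored via Git LFS, so the diff shows only pointer files: the `oid sha256:` field is the SHA-256 digest of the actual file content and `size` is its byte length. A minimal sketch of recomputing the oid for a downloaded file (the helper name is illustrative):

```python
import hashlib

def lfs_oid(path: str) -> str:
    """SHA-256 of a file's content, matching a Git LFS pointer's oid field."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large weights don't need to fit in memory
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()
```

Comparing `lfs_oid("model.safetensors")` against the `+ oid sha256:` line is a quick integrity check after downloading the updated weights.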