tristayqc committed
Commit 3b7c655
1 Parent(s): 76f616c

End of training

Files changed (2):
  1. README.md +9 -13
  2. model.safetensors +1 -1
README.md CHANGED
@@ -7,7 +7,6 @@ datasets:
 - common_voice_13_0
 metrics:
 - wer
-- cer
 model-index:
 - name: my_jp_asr_cv13_model
   results:
@@ -23,10 +22,7 @@ model-index:
       metrics:
       - name: Wer
         type: wer
-        value: 0.9
-      - name: Cer
-        type: cer
-        value: 0.2452
+        value: 0.875
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -36,9 +32,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-japanese](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-japanese) on the common_voice_13_0 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 2.2464
-- Cer: 0.2452
-- Wer: 0.9
+- Loss: 3.1772
+- Cer: 0.3512
+- Wer: 0.875
 
 ## Model description
 
@@ -71,10 +67,10 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Cer    | Wer |
-|:-------------:|:-----:|:----:|:---------------:|:------:|:---:|
-| 0.1017        | 400.0 | 1000 | 2.1846          | 0.25   | 0.8 |
-| 0.0553        | 800.0 | 2000 | 2.2464          | 0.2452 | 0.9 |
+| Training Loss | Epoch | Step | Validation Loss | Cer    | Wer   |
+|:-------------:|:-----:|:----:|:---------------:|:------:|:-----:|
+| 0.16          | 250.0 | 1000 | 3.1440          | 0.3223 | 0.875 |
+| 0.1061        | 500.0 | 2000 | 3.1772          | 0.3512 | 0.875 |
 
 
 ### Framework versions
@@ -82,4 +78,4 @@ The following hyperparameters were used during training:
 - Transformers 4.40.1
 - Pytorch 2.2.1+cu121
 - Datasets 2.19.0
-- Tokenizers 0.19.1
+- Tokenizers 0.19.1
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:90055ba578f166644b8b2b044d00d52efa438c661b137d7d821a1d78cd2f7071
+oid sha256:613a7d29d4e05bbe510322c113fd4a77a6ed288ff9204bb86690d91e81119072
 size 1271405604
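The Wer and Cer values updated by this commit are word and character error rates: token-level edit distance divided by the reference length. As a minimal pure-Python sketch of that computation (my own illustration with hypothetical helper names `edit_distance`, `wer`, `cer`, not the Trainer's actual evaluation code):

```python
def edit_distance(ref, hyp):
    # Classic one-row dynamic-programming Levenshtein distance over tokens.
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,          # deletion
                dp[j - 1] + 1,      # insertion
                prev + (r != h),    # substitution (free if tokens match)
            )
    return dp[-1]

def wer(reference: str, hypothesis: str) -> float:
    # Word error rate: word-level edit distance / reference word count.
    ref = reference.split()
    return edit_distance(ref, hypothesis.split()) / len(ref)

def cer(reference: str, hypothesis: str) -> float:
    # Character error rate: same idea at character granularity.
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

Note that WER and CER can exceed 1.0 when the hypothesis is much longer than the reference, which is why high values like 0.875 on a small evaluation set are plausible for a heavily overfit run (400+ epochs on the data above).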