futureProofGlitch committed
Commit 9747723
1 Parent(s): 08fef17

End of training

Files changed (2):
  1. README.md +9 -10
  2. model.safetensors +1 -1
README.md CHANGED
@@ -21,7 +21,7 @@ model-index:
     metrics:
     - name: Wer
       type: wer
-      value: 0.06265245859545512
+      value: 0.056233149313133904
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -31,9 +31,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [futureProofGlitch/whisper-small-v2](https://huggingface.co/futureProofGlitch/whisper-small-v2) on the TBK's Treasured Lectures dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3506
-- Wer Ortho: 0.1905
-- Wer: 0.0627
+- Loss: 0.3574
+- Wer Ortho: 0.1834
+- Wer: 0.0562
 
 ## Model description
 
@@ -59,18 +59,17 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: constant_with_warmup
 - lr_scheduler_warmup_steps: 10
-- training_steps: 125
+- training_steps: 100
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer    |
 |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
-| No log        | 0.21  | 25   | 0.8342          | 0.2376    | 0.0939 |
-| 3.0713        | 0.42  | 50   | 0.4416          | 0.2100    | 0.0648 |
-| 3.0713        | 0.64  | 75   | 0.3754          | 0.1860    | 0.0556 |
-| 0.3128        | 0.85  | 100  | 0.3574          | 0.1831    | 0.0561 |
-| 0.3128        | 1.06  | 125  | 0.3506          | 0.1905    | 0.0627 |
+| No log        | 0.21  | 25   | 0.8342          | 0.2377    | 0.0939 |
+| 3.0694        | 0.42  | 50   | 0.4413          | 0.2100    | 0.0651 |
+| 3.0694        | 0.64  | 75   | 0.3754          | 0.1859    | 0.0557 |
+| 0.3126        | 0.85  | 100  | 0.3574          | 0.1834    | 0.0562 |
 
 
 ### Framework versions
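The Wer values in the card are word error rates: the word-level edit distance between hypothesis and reference, divided by the number of reference words. A minimal, dependency-free sketch of that metric (the numbers reported above come from the Trainer's own evaluation pipeline, not from this code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length.

    Assumes a non-empty reference. Whitespace tokenization only; the
    'Wer Ortho' variant in the card additionally keeps original casing
    and punctuation rather than normalizing them away.
    """
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / len(ref)
```

A Wer of 0.0562 therefore means roughly 5.6 word-level errors per 100 reference words on the evaluation set.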
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4a76e3a205c424683c267de97b43b66583f5f36d07e2b8f43fd01042e6725daa
+oid sha256:42e744cb35ac5b9b9606f3219177846d62f881605ffd81da09fa131f882916b8
 size 966995080
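What changed in model.safetensors is not the weights themselves but a Git LFS pointer file: three space-separated key-value lines (version, oid, size), with the oid identifying the actual blob by its SHA-256 hash. A small sketch of reading one (the helper name `parse_lfs_pointer` is hypothetical, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a dict of its key-value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# The new pointer from this commit.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:42e744cb35ac5b9b9606f3219177846d62f881605ffd81da09fa131f882916b8\n"
    "size 966995080\n"
)
info = parse_lfs_pointer(pointer)
```

Note that both the old and new pointers report the same size (966995080 bytes): retraining changed the weight values, and hence the hash, but not the tensor shapes.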