Amna100 committed
Commit ce0d552
Parent: 83ed497

End of training

Files changed (2)
  1. README.md +83 -0
  2. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,83 @@
+ ---
+ license: mit
+ base_model: Amna100/PreTraining-MLM
+ tags:
+ - generated_from_trainer
+ metrics:
+ - precision
+ - recall
+ - f1
+ - accuracy
+ model-index:
+ - name: fold_10
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/lvieenf2)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/fgis28rc)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/9tw0vsla)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/ccjl3n87)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/geyuezlx)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/sv9tcfx8)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/9rg5cz4h)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/3fdbnjrq)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/l78entvo)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/s3e8xbt2)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/wgkbnjuf)
+ # fold_10
+
+ This model is a fine-tuned version of [Amna100/PreTraining-MLM](https://huggingface.co/Amna100/PreTraining-MLM) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0103
+ - Precision: 0.7508
+ - Recall: 0.5791
+ - F1: 0.6538
+ - Accuracy: 0.9992
+ - ROC AUC: 0.9980
+ - PR AUC: 0.9999
+
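The card does not state the downstream task, but the token-level precision/recall/F1 metrics and the ADE (adverse drug event) W&B project name suggest token classification. The snippet below is a minimal usage sketch under that assumption; the repo id `Amna100/fold_10` and the example sentence are illustrative, not values taken from the card.

```python
# Minimal usage sketch, assuming a token-classification head.
# "Amna100/fold_10" is an assumed repo id inferred from the model name;
# replace it with the actual checkpoint path if it differs.
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_id = "Amna100/fold_10"  # assumption, not stated in the card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

tagger = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # merge word-piece predictions into spans
)
print(tagger("The patient developed a severe rash after starting amoxicillin."))
```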
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a hedged `TrainingArguments` sketch reproducing them follows the list):
+ - learning_rate: 5e-05
+ - train_batch_size: 5
+ - eval_batch_size: 5
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 10
+
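As referenced above, here is a minimal sketch of the listed hyperparameters expressed as Hugging Face `TrainingArguments`. The `output_dir` value and the surrounding `Trainer` wiring are assumptions; the card only records the hyperparameters themselves, and the listed Adam betas, epsilon, and linear schedule already match the `Trainer` defaults.

```python
# Hedged reconstruction of the hyperparameters listed above.
# output_dir is an assumed placeholder; the optimizer (AdamW, betas=(0.9, 0.999),
# eps=1e-8) and the linear LR schedule are Trainer defaults, so no extra flags are needed.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="fold_10",            # assumption: the card does not state the output path
    learning_rate=5e-5,
    per_device_train_batch_size=5,
    per_device_eval_batch_size=5,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```

Although `num_epochs` is 10, the results table below stops at epoch 5, which is consistent with early stopping or a manually halted run; the card does not record which.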
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | ROC AUC | PR AUC |
+ |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------:|:------:|
+ | 0.0258 | 1.0 | 632 | 0.0109 | 0.7529 | 0.4745 | 0.5821 | 0.9992 | 0.9977 | 0.9999 |
+ | 0.0104 | 2.0 | 1264 | 0.0103 | 0.7508 | 0.5791 | 0.6538 | 0.9992 | 0.9980 | 0.9999 |
+ | 0.0058 | 3.0 | 1896 | 0.0116 | 0.7394 | 0.6764 | 0.7065 | 0.9993 | 0.9967 | 0.9999 |
+ | 0.0024 | 4.0 | 2528 | 0.0133 | 0.7740 | 0.6667 | 0.7163 | 0.9993 | 0.9956 | 0.9998 |
+ | 0.0013 | 5.0 | 3160 | 0.0144 | 0.7581 | 0.6861 | 0.7203 | 0.9993 | 0.9928 | 0.9998 |
+
+
+ ### Framework versions
+
+ - Transformers 4.41.0.dev0
+ - PyTorch 2.2.1+cu121
+ - Datasets 2.19.1
+ - Tokenizers 0.19.1
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b6f800d66a0b0a3b2e1d1b58da4a617ba711b4b66db3572284cf8c3e5060b8f0
+ oid sha256:1373f1c29b7e6773eee9dbe1f6c10546f51364a533f62c92f0ce893f750ced3d
  size 554446244