Dev-SriramB committed
Commit c404bcc
Parent: 575a086

Dev-SriramB/DPB_buster

Files changed (3)
  1. README.md +11 -19
  2. adapter_model.safetensors +1 -1
  3. training_args.bin +2 -2
README.md CHANGED
@@ -1,9 +1,9 @@
 ---
-license: apache-2.0
+base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
 library_name: peft
+license: apache-2.0
 tags:
 - generated_from_trainer
-base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
 model-index:
 - name: balagpt-ft2
   results: []
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6050
+- Loss: 0.7072
 
 ## Model description
 
@@ -44,29 +44,21 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 2
-- num_epochs: 10
+- num_epochs: 2
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 2.3314 | 1.0 | 5 | 1.9312 |
-| 1.7381 | 2.0 | 10 | 1.4899 |
-| 1.3108 | 3.0 | 15 | 1.1581 |
-| 0.9634 | 4.0 | 20 | 0.9015 |
-| 0.718 | 5.0 | 25 | 0.7503 |
-| 0.5738 | 6.0 | 30 | 0.6664 |
-| 0.4784 | 7.0 | 35 | 0.6163 |
-| 0.4266 | 8.0 | 40 | 0.6102 |
-| 0.4016 | 9.0 | 45 | 0.6052 |
-| 0.3864 | 10.0 | 50 | 0.6050 |
+| 0.5677 | 1.0 | 175 | 0.6737 |
+| 0.2266 | 2.0 | 350 | 0.7072 |
 
 
 ### Framework versions
 
-- PEFT 0.10.0
-- Transformers 4.39.3
-- Pytorch 2.1.0+cu121
-- Datasets 2.19.0
-- Tokenizers 0.15.2
+- PEFT 0.13.0
+- Transformers 4.44.2
+- Pytorch 2.4.1+cu121
+- Datasets 3.0.1
+- Tokenizers 0.19.1
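The card names only the base model, so here is a minimal loading sketch for the updated adapter. The repo id `Dev-SriramB/DPB_buster` is taken from the commit header, the prompt and generation settings are illustrative assumptions, and loading the GPTQ base assumes a GPTQ backend (e.g. optimum with auto-gptq) is installed alongside the framework versions above.

```python
# Minimal sketch, not the author's documented usage: load the PEFT adapter
# on top of its GPTQ base and run one generation.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "Dev-SriramB/DPB_buster",  # adapter repo from this commit; base resolves from its config
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(
    "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"  # base model named in the card
)

# Illustrative prompt in Mistral-Instruct format; not taken from the card.
inputs = tokenizer("[INST] Hello! [/INST]", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```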
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a5064f0c517c83635d202bde21aaed816bebf91bac61a6c1e509894259c547b5
+oid sha256:1159a2c2cf19369defcc228607f5dd60018c868bfccb96b407aa2458afdb77db
 size 8397056
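Both binary files are Git LFS pointers, so the diff shows only the new sha256 oid (and byte size). As a sketch, a downloaded copy can be checked against the pointer like this; the local filename is an assumption:

```python
# Verify a downloaded LFS object against the sha256 oid in its pointer file.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:  # read in chunks to bound memory on large files
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# oid recorded in this commit's pointer for adapter_model.safetensors
expected = "1159a2c2cf19369defcc228607f5dd60018c868bfccb96b407aa2458afdb77db"
assert sha256_of("adapter_model.safetensors") == expected, "hash mismatch"
```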
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e0e628017ce5b7595994d35d37ee0a2e8bb5d8bbb9cb0cfa9cf0b477bde014f4
-size 4920
+oid sha256:03547ca9e6158c53cf938382db71ead0e4656fc78145f6868a1c06a0b729289b
+size 5176
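training_args.bin is the Trainer's serialized TrainingArguments (hence the size change when the run was reconfigured). A hedged reconstruction from the hyperparameters visible in the README diff follows; the learning rate, batch size, and output directory do not appear in the shown hunks, so those values are placeholders:

```python
# Hedged reconstruction of the saved TrainingArguments; only the values marked
# "from the README diff" are grounded in this commit -- the rest are placeholders.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="balagpt-ft2",       # placeholder; reuses the model-index name
    num_train_epochs=2,             # from the README diff (changed 10 -> 2)
    lr_scheduler_type="linear",     # from the README diff
    warmup_steps=2,                 # from the README diff
    fp16=True,                      # "Native AMP" mixed-precision training
    adam_beta1=0.9,                 # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,              # epsilon=1e-08
    learning_rate=2e-4,             # placeholder; not shown in the diff hunks
    per_device_train_batch_size=1,  # placeholder; not shown in the diff hunks
)
```

The actual file can be inspected with `torch.load("training_args.bin")` under the matching Transformers/PyTorch versions listed above.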