ahmedgongi committed on
Commit
2c8768f
1 Parent(s): 1eb2d16

ahmedgongi/codeLlama_instruct_devops_expert1

Files changed (1)
  1. README.md +8 -6
README.md CHANGED
@@ -1,9 +1,9 @@
 ---
-license: apache-2.0
+license: llama2
 library_name: peft
 tags:
 - generated_from_trainer
-base_model: mistralai/Mistral-7B-Instruct-v0.2
+base_model: codellama/CodeLlama-7b-Instruct-hf
 model-index:
 - name: mistral_instruct
   results: []
@@ -14,9 +14,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # mistral_instruct
 
-This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
+This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 5.0127
+- Loss: 1.6038
 
 ## Model description
 
@@ -44,14 +44,16 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 2
-- num_epochs: 60
+- num_epochs: 3
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 0.2278        | 60.0  | 300  | 5.0127          |
+| 1.6251        | 1.0   | 361  | 1.5966          |
+| 1.5575        | 2.0   | 723  | 1.5952          |
+| 1.5094        | 3.0   | 1083 | 1.6038          |
 
 
 ### Framework versions
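
The updated card lists a linear scheduler with 2 warmup steps over 3 epochs, and the new training table implies roughly 1083 optimizer steps in total. As a minimal sketch (not the author's training code), the learning-rate multiplier such a schedule produces, in the style of `transformers`' `get_linear_schedule_with_warmup`, can be written as:

```python
def linear_schedule(step, warmup_steps=2, total_steps=1083):
    """Learning-rate multiplier: linear warmup to 1.0 over `warmup_steps`,
    then linear decay to 0.0 at `total_steps`.

    `total_steps=1083` is an assumption taken from the last row of the
    new training-results table; `warmup_steps=2` comes from the card.
    """
    if step < warmup_steps:
        # Warmup phase: ramp from 0 toward 1.
        return step / max(1, warmup_steps)
    # Decay phase: fall linearly from 1 at the end of warmup to 0 at the end.
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
```

With only 2 warmup steps out of ~1083, the schedule is effectively a straight linear decay for nearly the whole run.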