KarthikeyaSKGP committed on
Commit c708df5
1 Parent(s): fa190d7

Training in progress, epoch 0

Files changed (3):
  1. README.md +27 -0
  2. adapter_model.bin +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -6,6 +6,7 @@ tags:
 model-index:
 - name: know_sql
   results: []
+library_name: peft
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -29,6 +30,30 @@ More information needed
 
 ## Training procedure
 
+
+The following `bitsandbytes` quantization config was used during training:
+- quant_method: bitsandbytes
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: float16
+
+The following `bitsandbytes` quantization config was used during training:
+- quant_method: bitsandbytes
+- load_in_8bit: False
+- load_in_4bit: True
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: nf4
+- bnb_4bit_use_double_quant: True
+- bnb_4bit_compute_dtype: float16
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -46,6 +71,8 @@ The following hyperparameters were used during training:
 
 ### Framework versions
 
+- PEFT 0.5.0
+- PEFT 0.5.0
 - Transformers 4.34.1
 - Pytorch 2.1.0+cu118
 - Datasets 2.14.6
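The quantization settings listed in the README diff can be expressed as a `transformers` `BitsAndBytesConfig`. This is a minimal sketch of what that config would look like in code; it is reconstructed from the list above, not taken from the training script itself, and the config-object form (rather than raw kwargs) is an assumption about how training was set up.

```python
# Sketch of the 4-bit NF4 quantization config from the model card.
# Values mirror the README diff; the surrounding usage is assumed.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,                      # load_in_4bit: True
    llm_int8_threshold=6.0,                 # llm_int8_threshold: 6.0
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",              # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=True,         # double quantization enabled
    bnb_4bit_compute_dtype=torch.float16,   # compute in fp16
)

# The config would then be passed to from_pretrained, e.g.:
# model = AutoModelForCausalLM.from_pretrained(base_model_id,
#                                              quantization_config=bnb_config)
```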
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:62e7d6b04366433cba2a54a98767f6211e3ba1960262d7c80a098813ada337ed
+oid sha256:0cc2ee36e2859db169c9f4ee0a8f7b901c1e1b2ebd39c9f96f25fe8951b61dd1
 size 18908110
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bc027d243a490995cb0d616f7a3e0386e95bbab297d1a5f588a3ac43cef384ae
+oid sha256:c3b701b90c0b851b177b248c889e5169ac3751f95b5410c155004a9e7886c3d4
 size 4536