---
base_model: Qwen/Qwen2-7B-Instruct
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: workspace/axolotl/vinh/Qwen_Qwen2-7B-Instruct-lora-2024-06-29-17-30-14
  results: []
---

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
See axolotl config

axolotl version: `0.4.1`

```yaml
base_model: Qwen/Qwen2-7B-Instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: /workspace/axolotl/vinh/PAL/input_output_qwen.json
    type: input_output
  - path: /workspace/axolotl/vinh/INSTRUCT/input_output_qwen.json
    type: input_output
dataset_prepared_path:
val_set_size: 0.05
eval_sample_packing: false
output_dir: /workspace/axolotl/vinh/Qwen_Qwen2-7B-Instruct-lora-2024-06-29-17-30-14

sequence_len: 2048
sample_packing: false
pad_to_sequence_len: false

adapter: lora
lora_model_dir:
lora_r: 64
lora_alpha: 128
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 128
micro_batch_size: 1
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 2e-4

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:

loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3

warmup_steps: 10
evals_per_epoch: 10
eval_table_size:
eval_max_new_tokens: 512
saves_per_epoch: 2
save_total_limit: 20
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
```
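For reference, the effective batch size reported under "Training hyperparameters" below follows directly from the `micro_batch_size` and `gradient_accumulation_steps` values in this config. A minimal sketch of the arithmetic, assuming single-GPU training (the number of devices is not stated in the config):

```python
# Sketch of how the total train batch size follows from the config above.
# Assumption: training ran on a single GPU (world_size = 1).
micro_batch_size = 1              # per-device batch size (micro_batch_size)
gradient_accumulation_steps = 128
world_size = 1                    # assumed number of GPUs

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * world_size
print(total_train_batch_size)     # 128, matching total_train_batch_size reported below
```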

# workspace/axolotl/vinh/Qwen_Qwen2-7B-Instruct-lora-2024-06-29-17-30-14

This model is a LoRA fine-tuned version of [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct), trained on the two JSON datasets listed in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 0.0911

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

Training used the two local datasets from the axolotl config (`PAL/input_output_qwen.json` and `INSTRUCT/input_output_qwen.json`), with 5% of the data held out for evaluation (`val_set_size: 0.05`).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 128
- optimizer: paged AdamW (32-bit) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5558        | 0.0063 | 1    | 0.5296          |
| 0.1574        | 0.1011 | 16   | 0.1632          |
| 0.1279        | 0.2023 | 32   | 0.1379          |
| 0.1166        | 0.3034 | 48   | 0.1265          |
| 0.1335        | 0.4045 | 64   | 0.1188          |
| 0.1145        | 0.5056 | 80   | 0.1134          |
| 0.1036        | 0.6068 | 96   | 0.1082          |
| 0.0937        | 0.7079 | 112  | 0.1063          |
| 0.0934        | 0.8090 | 128  | 0.1029          |
| 0.0975        | 0.9101 | 144  | 0.1008          |
| 0.0657        | 1.0113 | 160  | 0.0980          |
| 0.0671        | 1.1124 | 176  | 0.0990          |
| 0.0664        | 1.2135 | 192  | 0.0986          |
| 0.0735        | 1.3146 | 208  | 0.0965          |
| 0.0694        | 1.4158 | 224  | 0.0944          |
| 0.0555        | 1.5169 | 240  | 0.0923          |
| 0.0719        | 1.6180 | 256  | 0.0914          |
| 0.071         | 1.7191 | 272  | 0.0894          |
| 0.073         | 1.8203 | 288  | 0.0876          |
| 0.0543        | 1.9214 | 304  | 0.0869          |
| 0.043         | 2.0225 | 320  | 0.0866          |
| 0.0333        | 2.1236 | 336  | 0.0934          |
| 0.0392        | 2.2248 | 352  | 0.0924          |
| 0.0453        | 2.3259 | 368  | 0.0919          |
| 0.0488        | 2.4270 | 384  | 0.0920          |
| 0.0361        | 2.5281 | 400  | 0.0915          |
| 0.0357        | 2.6293 | 416  | 0.0912          |
| 0.0364        | 2.7304 | 432  | 0.0912          |
| 0.0365        | 2.8315 | 448  | 0.0912          |
| 0.0338        | 2.9326 | 464  | 0.0911          |

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
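
## Example usage

The snippet below is a minimal inference sketch, not an official recipe: it assumes the LoRA adapter weights from this training run are available at a local path (shown as `./adapter`, a placeholder) and loads them on top of the base model with PEFT and Transformers.

```python
# Minimal inference sketch: load the base Qwen2-7B-Instruct model and apply
# this LoRA adapter with PEFT. The adapter path below is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "Qwen/Qwen2-7B-Instruct"
adapter_path = "./adapter"  # placeholder: local directory containing the adapter weights

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.bfloat16,  # training used bf16
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_path)

messages = [{"role": "user", "content": "Write a short Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Adjust `adapter_path` to wherever the adapter weights from this run are actually stored.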