---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: peft
license: mit
tags:
- axolotl
- generated_from_trainer
model-index:
- name: phi3-deepseek-27k-cleanedplans-longtrain
  results: []
---

[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)
See axolotl config

axolotl version: `0.4.1`
```yaml
# model and tokenizer
base_model: microsoft/Phi-3-mini-4k-instruct # change for model
trust_remote_code: true
sequence_len: 2048
strict: false
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
bf16: auto
pad_to_sequence_len: true
save_safetensors: true

datasets:
  - path: verifiers-for-code/cleaned_deepseek_plans
    type: completion
    field: text
    train_on_split: train

val_set_size: 0.05

# lora
adapter: lora
lora_r: 2048
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true
lora_modules_to_save:
  - embed_tokens
  - lm_head
use_rslora: true

# logging
wandb_project: valeris
wandb_name: phi3-deepseek-27k-cleanedplans-longtrain
output_dir: ./outputs/phi3-deepseek-27k-cleanedplans-longtrain

gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: true
micro_batch_size: 1
num_epochs: 3
eval_batch_size: 1
warmup_ratio: 0.05
learning_rate: 5e-6
lr_scheduler: cosine
optimizer: adamw_torch

hub_model_id: verifiers-for-code/phi3-deepseek-27k-cleanedplans-longtrain
push_to_hub: true
hub_always_push: true

evals_per_epoch: 8
saves_per_epoch: 4
logging_steps: 1
# eval_table_size: 10
# eval_max_new_tokens: 512

tokens: ["", "", "", ""]

special_tokens:
  pad_token: "<|endoftext|>"
```

# phi3-deepseek-27k-cleanedplans-longtrain

This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the [verifiers-for-code/cleaned_deepseek_plans](https://huggingface.co/datasets/verifiers-for-code/cleaned_deepseek_plans) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3618

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 242
- num_epochs: 3

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5446        | 0.0006 | 1    | 0.4877          |
| 0.4894        | 0.1255 | 203  | 0.4453          |
| 0.4329        | 0.2509 | 406  | 0.3950          |
| 0.4223        | 0.3764 | 609  | 0.3772          |
| 0.3909        | 0.5019 | 812  | 0.3705          |
| 0.3837        | 0.6273 | 1015 | 0.3676          |
| 0.3959        | 0.7528 | 1218 | 0.3658          |
| 0.3516        | 0.8782 | 1421 | 0.3642          |
| 0.3757        | 1.0037 | 1624 | 0.3632          |
| 0.3222        | 1.1292 | 1827 | 0.3627          |
| 0.3095        | 1.2546 | 2030 | 0.3624          |
| 0.3234        | 1.3801 | 2233 | 0.3621          |
| 0.3776        | 1.5056 | 2436 | 0.3620          |
| 0.3471        | 1.6310 | 2639 | 0.3618          |
| 0.343         | 1.7565 | 2842 | 0.3617          |
| 0.3898        | 1.8820 | 3045 | 0.3618          |
| 0.3207        | 2.0074 | 3248 | 0.3618          |
| 0.3486        | 2.1329 | 3451 | 0.3618          |
| 0.3362        | 2.2583 | 3654 | 0.3618          |
| 0.3444        | 2.3838 | 3857 | 0.3618          |
| 0.3717        | 2.5093 | 4060 | 0.3618          |
| 0.3482        | 2.6347 | 4263 | 0.3618          |
| 0.3393        | 2.7602 | 4466 | 0.3617          |
| 0.3121        | 2.8857 | 4669 | 0.3618          |

### Framework versions

- PEFT 0.11.1
- Transformers 4.44.0.dev0
- Pytorch 2.4.0
- Datasets 2.19.1
- Tokenizers 0.19.1
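
## Usage sketch

The following is a minimal, untested sketch of how the published LoRA adapter could be loaded with PEFT and Transformers. The hub id and base model come from the axolotl config above; the prompt and generation settings are purely illustrative, since the exact prompt format used in the completion-style training data is not documented here.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Hub id taken from the axolotl config above; everything below it is illustrative.
adapter_id = "verifiers-for-code/phi3-deepseek-27k-cleanedplans-longtrain"

# AutoPeftModelForCausalLM resolves the adapter config, loads the base model
# (microsoft/Phi-3-mini-4k-instruct) and attaches the LoRA weights on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(adapter_id, trust_remote_code=True)

# Example prompt only; adapt it to match the layout of the training data.
prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```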