---
license: other
library_name: peft
tags:
- generated_from_trainer
base_model: google/gemma-7b-it
model-index:
- name: out
  results: []
pipeline_tag: text-generation
---

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
# use google/gemma-7b if you have access
base_model: google/gemma-7b-it
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: true
strict: false

# huggingface repo
datasets:
  - path: ./python-oasst/chunk_1.jsonl
    type: oasst
val_set_size: 0.1
output_dir: ./out

adapter: qlora
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true

sequence_len: 4096
sample_packing: false
pad_to_sequence_len: true

wandb_project: gemma-7b-it
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 6
micro_batch_size: 4
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: true
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_ratio: 0.1
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed: deepspeed_configs/zero1.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```

</details><br>
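With Axolotl 0.4.0 installed, a config like the one above is typically launched with `accelerate launch -m axolotl.cli.train path/to/config.yaml`; the DeepSpeed ZeRO-1 file it references (`deepspeed_configs/zero1.json`) ships with the Axolotl repository.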

# out

This model is a QLoRA fine-tune of [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) on a local OASST-format dataset (`./python-oasst/chunk_1.jsonl`; see the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 1.1905

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 6
- total_train_batch_size: 96 (micro-batch 4 × gradient accumulation 6 × 4 devices)
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 9
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.0474        | 0.01  | 1    | 5.9279          |
| 1.221         | 0.26  | 24   | 1.2960          |
| 1.1167        | 0.51  | 48   | 1.1657          |
| 1.0702        | 0.77  | 72   | 1.1372          |
| 0.9553        | 1.02  | 96   | 1.1292          |
| 0.9294        | 1.28  | 120  | 1.1301          |
| 0.9603        | 1.54  | 144  | 1.1254          |
| 0.8544        | 1.79  | 168  | 1.1276          |
| 0.826         | 2.05  | 192  | 1.1462          |
| 0.816         | 2.31  | 216  | 1.1500          |
| 0.7392        | 2.56  | 240  | 1.1446          |
| 0.7597        | 2.82  | 264  | 1.1469          |
| 0.6664        | 3.07  | 288  | 1.1908          |
| 0.6968        | 3.33  | 312  | 1.1842          |
| 0.7327        | 3.59  | 336  | 1.1899          |
| 0.7211        | 3.84  | 360  | 1.1905          |

### Framework versions

- PEFT 0.9.0
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.18.0
- Tokenizers 0.15.0
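### Inference example

Because this repository holds a PEFT (QLoRA) adapter rather than full model weights, inference loads the `google/gemma-7b-it` base model and applies the adapter on top. Below is a minimal sketch using `peft`'s `AutoPeftModelForCausalLM`; the adapter id is a hypothetical placeholder, so substitute the local `./out` path or your own Hub repo id.

```python
# Minimal inference sketch for the QLoRA adapter (not part of the generated card).
# "your-username/gemma-7b-it-qlora" is a hypothetical placeholder repo id.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "your-username/gemma-7b-it-qlora"  # or the local "./out" directory

# Loads google/gemma-7b-it (recorded in the adapter config) and applies the LoRA weights.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it")

# Gemma instruction-tuned models expect their chat template for prompting.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Strip the prompt tokens before decoding the completion.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

To mirror the 4-bit setting used during training and reduce memory, a `BitsAndBytesConfig(load_in_4bit=True)` can be passed as `quantization_config` when loading the adapter.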