---
license: apache-2.0
library_name: peft
tags:
  - trl
  - sft
  - generated_from_trainer
base_model: cognitivecomputations/dolphin-2.2.1-mistral-7b
model-index:
  - name: assistant-dolphin-2.2.1-mistral-7b-e1-qlora
    results: []
datasets:
  - wasertech/OneOS
---

# Assistant Dolphin 2.2.1 Mistral 7B (1 epoch) QLoRA

This model is a version of cognitivecomputations/dolphin-2.2.1-mistral-7b fine-tuned for one epoch on the OneOS dataset.

## Model description

Assistant Dolphin 2.2.1 Mistral 7B is a QLoRA fine-tune of the cognitivecomputations/dolphin-2.2.1-mistral-7b model, trained for a single epoch on the OneOS dataset.
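The base Dolphin 2.2.1 model uses the ChatML prompt format, so the same format presumably applies here. A minimal sketch of a prompt builder (the helper name is hypothetical; the template is taken from the base model's conventions):

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a ChatML prompt as used by the base Dolphin 2.2.1 model.

    The trailing '<|im_start|>assistant' turn cues the model to respond.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )
```

For example, `build_prompt("You are a helpful assistant.", "List files in the current directory.")` yields a prompt ready to be tokenized and passed to `generate`.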

## Intended uses & limitations

This model is intended for use in natural language processing systems to improve text understanding and generation. Its specific limitations depend on the training and evaluation data.
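Since this is a PEFT adapter, it can be loaded on top of the base model with `AutoPeftModelForCausalLM`. A minimal loading sketch, assuming the adapter repo id matches the model name above (imports are deferred inside the helper so the snippet stays self-contained):

```python
# Assumed adapter repo id, inferred from the model name above.
ADAPTER_REPO = "wasertech/assistant-dolphin-2.2.1-mistral-7b-e1-qlora"

def load_model(repo_id: str = ADAPTER_REPO):
    """Load the base model with the QLoRA adapter applied (downloads weights)."""
    # Deferred imports: defining the helper does not require peft/transformers.
    import torch
    from peft import AutoPeftModelForCausalLM
    from transformers import AutoTokenizer

    model = AutoPeftModelForCausalLM.from_pretrained(
        repo_id, torch_dtype=torch.float16, device_map="auto"
    )
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    return model, tokenizer
```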

## Training and evaluation data

The model was trained on the OneOS dataset.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1.41e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
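The `total_train_batch_size` above is not an independent setting; it is the per-device batch size multiplied by the gradient accumulation steps:

```python
# Effective (total) train batch size from the hyperparameters above.
train_batch_size = 1             # per-device batch size
gradient_accumulation_steps = 2  # optimizer steps every 2 micro-batches
total_train_batch_size = train_batch_size * gradient_accumulation_steps
```

With these values, gradients from two micro-batches of one example each are accumulated before every optimizer step, giving an effective batch size of 2.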

### Training results

### Framework versions

- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0