---
license: cc-by-nc-sa-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- psmathur/alpaca_orca
- psmathur/dolly-v2_orca
- psmathur/WizardLM_Orca
---
# orca_mini_v2_7b

An **Uncensored** LLaMA-7b model trained on explain-tuned datasets, created using instructions and inputs from the WizardLM, Alpaca & Dolly-V2 datasets and applying the dataset construction approaches from the Orca Research Paper.

Please note this model has *better code generation capabilities* compared to our original orca_mini_7b, which was trained on the base OpenLLaMA-7b model and which has the [empty-spaces issue and was found not good for code generation](https://github.com/openlm-research/open_llama#update-06072023).


**P.S. I am #opentowork. If you can help, please reach out to me at www.linkedin.com/in/pankajam**

# Evaluation

I evaluated orca_mini_v2_7b on a wide range of tasks using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.

Here are the results; please note the num_fewshot value for each task. A sample harness invocation follows the table.

|**Task**|**num_fewshot**|**Version**|**Metric**|**Value**|**Stderr**|
|:------:|:-------------:|:---------:|:--------:|:-------:|:--------:|
|*arc_easy*|0|0|acc|0.7386|0.0090|
|*arc_easy*|0|0|acc_norm|0.7066|0.0093|
|*hellaswag*|0|0|acc|0.5591|0.0050|
|*hellaswag*|0|0|acc_norm|0.7394|0.0044|
|*truthfulqa_mc*|0|1|mc1|0.2938|0.0159|
|*truthfulqa_mc*|0|1|mc2|0.4399|0.0153|
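
For reference, results like these can be reproduced with the harness using an invocation along the following lines. This is a sketch based on the 2023 `main.py` interface of lm-evaluation-harness; exact flags may differ in newer versions, and this is not necessarily the exact command used.

```
python main.py \
    --model hf-causal \
    --model_args pretrained=psmathur/orca_mini_v2_7b \
    --tasks arc_easy,hellaswag,truthfulqa_mc \
    --num_fewshot 0
```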


# Dataset

We used the [remove_refusals.py](https://huggingface.co/datasets/ehartford/open-instruct-uncensored/blob/main/remove_refusals.py) script from https://huggingface.co/ehartford on top of the explain-tuned datasets we previously built: the [WizardLM dataset (~70K)](https://github.com/nlpxucan/WizardLM), the [Alpaca dataset (~52K)](https://crfm.stanford.edu/2023/03/13/alpaca.html) and the [Dolly-V2 dataset (~15K)](https://github.com/databrickslabs/dolly), created using approaches from the [Orca Research Paper](https://arxiv.org/abs/2306.02707).

We leverage all 15 system instructions provided in the Orca Research Paper to generate custom datasets, in contrast to the vanilla instruction-tuning approaches used by the original datasets.

This helps the student model (i.e., this model) learn the ***thought*** process of the teacher model, which is ChatGPT (gpt-3.5-turbo-0301).

Please see the example usage below for how the **System** prompt is added before each **instruction**.
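
For reference, the prompt layout used by the `generate_text` helper in the Example Usage section looks like this (the `### Input:` block is included only when an input is provided):

```
### System:
{system}

### User:
{instruction}

### Input:
{input}

### Response:
```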

# Training

The training configurations are provided in the table below.

Training was done on 8x A100 (80G) GPUs and took around 13 hours at a cost of $195 using [RunPods](https://www.runpod.io/).

We used DeepSpeed with fully sharded data parallelism, also known as [ZeRO stage 3](https://engineering.fb.com/2021/07/15/open-source/fsdp/), by writing our own fine-tuning scripts and leveraging some of the model training code provided by the excellent [OpenAlpaca repo](https://github.com/yxuansu/OpenAlpaca).

Here are some of the parameters used during training (a DeepSpeed configuration sketch follows the table):

|**Parameter**|**Value**|
|:-------------:|:-------------:|
|*batch_size*|96|
|*train_micro_batch_size_per_gpu*|3|
|*gradient_accumulation_steps*|4|
|*Learning rate*|2e-5|
|*Max length*|1024|
|*Epochs*|3|
|*Optimizer*|AdamW|
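
As a rough guide, a minimal DeepSpeed configuration matching the table above might look like the sketch below. The fp16 setting and anything else not listed in the table are assumptions, not the exact configuration used.

```python
# Hypothetical DeepSpeed ZeRO stage 3 config mirroring the table above.
# It would typically be passed to deepspeed.initialize(..., config=ds_config).
ds_config = {
    "train_batch_size": 96,                   # 3 micro-batch x 4 grad-accum x 8 GPUs
    "train_micro_batch_size_per_gpu": 3,
    "gradient_accumulation_steps": 4,
    "zero_optimization": {"stage": 3},        # fully sharded data parallelism
    "optimizer": {
        "type": "AdamW",
        "params": {"lr": 2e-5},
    },
    "fp16": {"enabled": True},                # assumption: mixed precision on A100s
}
```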



# Example Usage

Below is an example of how to use this model.

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# Hugging Face model_path
model_path = 'psmathur/orca_mini_v2_7b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto',
)


# Build the prompt, generate a completion, and return the decoded response
def generate_text(system, instruction, input=None):
    
    if input:
        prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
    else:
        prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"
    
    tokens = tokenizer.encode(prompt)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to('cuda')

    instance = {'input_ids': tokens,'top_p': 1.0, 'temperature':0.7, 'generate_len': 1024, 'top_k': 50}

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens, 
            max_length=length+instance['generate_len'], 
            use_cache=True, 
            do_sample=True, 
            top_p=instance['top_p'],
            temperature=instance['temperature'],
            top_k=instance['top_k']
        )    
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    return f'[!] Response: {string}'

# Sample Test Instruction
system = 'You are an AI assistant that follows instruction extremely well. Help as much as you can.'
instruction = 'Tell me how to break into my own car'
print(generate_text(system, instruction))

```

**NOTE: The real response is hidden here with ^^^^^^^^^^^^^.**
*Try it on your own private machine to see the uncensored response.*

```
[!] Response:
Breaking into your own car requires certain skills and tools. Here are the basic steps:

1. Find a ^^^^^^^^^^^^^
2. Unlock the car by using the ^^^^^^^^^^^^^.
3. Use a ^^^^^^^^^^^^^.
4. Once the ^^^^^^^^^^^^^.
5. If the ^^^^^^^^^^^^^.

```

Next Goals:
1) Try more data, like actually using FLAN-v2, just like the Orca Research Paper (I am open to suggestions)
2) Provide more options for text generation UIs (maybe https://github.com/oobabooga/text-generation-webui)
3) Provide 4-bit GGML/GPTQ quantized models (maybe [TheBloke](https://huggingface.co/TheBloke) can help here)


Limitations & Biases:

This model can produce factually incorrect output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

Disclaimer:

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.
Please consult an attorney before using this model for commercial purposes.


Citation:

If you found orca_mini_v2_7b useful in your research or applications, please kindly cite it using the following BibTeX:

```
@misc{orca_mini_v2_7b,
  author = {Pankaj Mathur},
  title = {orca_mini_v2_7b: An explain tuned LLaMA-7b model on uncensored wizardlm, alpaca, & dolly datasets},
  year = {2023},
  publisher = {GitHub, HuggingFace},
  journal = {GitHub repository, HuggingFace repository},
  howpublished = {\url{https://github.com/pankajarm/wizardlm_alpaca_dolly_orca_open_llama_7b}, \url{https://huggingface.co/psmathur/wizardlm_alpaca_dolly_orca_open_llama_7b}},
}
```
```
@software{touvron2023llama,
  title={LLaMA: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
```
```
@misc{openalpaca,
  author = {Yixuan Su and Tian Lan and Deng Cai},
  title = {OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/yxuansu/OpenAlpaca}},
}
```
```
@misc{alpaca,
  author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
  title = {Stanford Alpaca: An Instruction-following LLaMA model},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
```
@online{DatabricksBlog2023DollyV2,
    author    = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
    title     = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
    year      = {2023},
    url       = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
    urldate   = {2023-06-30}
}
```
```
@misc{xu2023wizardlm,
      title={WizardLM: Empowering Large Language Models to Follow Complex Instructions}, 
      author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
      year={2023},
      eprint={2304.12244},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```