The language model Phi-1.5 is a Transformer with **1.3 billion** parameters. It was trained using the same data sources as [phi-1](https://huggingface.co/microsoft/phi-1), augmented with a new data source that consists of various NLP synthetic texts. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-1.5 demonstrates nearly state-of-the-art performance among models with less than 10 billion parameters.

# Phi-1_5-Instruct-v0.1

The model underwent a post-training process that incorporates both **supervised fine-tuning** and **direct preference optimization** for instruction following. I used the [trl](https://huggingface.co/docs/trl/en/index) library and a single **A100 40GB** GPU during both the SFT and DPO steps; a minimal sketch of this two-stage flow follows the dataset list below.
- Supervised Fine-Tuning
  - Used 128,000 instruction, response pairs from the [teknium/OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) dataset
- Direct Preference Optimization
  - [argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo)
  - [jondurbin/py-dpo-v0.1](https://huggingface.co/datasets/jondurbin/py-dpo-v0.1)
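The snippet below is a minimal sketch of this two-stage flow with the trl library. The base checkpoint, hyperparameters, output paths, and dataset preprocessing are illustrative assumptions rather than the exact recipe used for this model, and the trl constructor arguments vary somewhat between library versions.

```python
# Illustrative sketch only: base checkpoint, hyperparameters, and output paths
# are assumptions, not the exact recipe used to train Phi-1_5-Instruct-v0.1.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer, SFTConfig, SFTTrainer

base_model_id = "microsoft/phi-1_5"  # assumed starting checkpoint
model = AutoModelForCausalLM.from_pretrained(base_model_id)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Stage 1: supervised fine-tuning on instruction/response pairs.
# OpenHermes-2.5 stores ShareGPT-style "conversations"; in practice they must
# first be mapped to the chat format the trainer expects (omitted here).
sft_dataset = load_dataset("teknium/OpenHermes-2.5", split="train")
sft_trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,  # `tokenizer=` in older trl releases
    train_dataset=sft_dataset,
    args=SFTConfig(output_dir="phi-1_5-sft"),
)
sft_trainer.train()

# Stage 2: direct preference optimization on prompt/chosen/rejected triples.
dpo_dataset = load_dataset("jondurbin/py-dpo-v0.1", split="train")
dpo_trainer = DPOTrainer(
    model=sft_trainer.model,
    processing_class=tokenizer,
    train_dataset=dpo_dataset,
    args=DPOConfig(output_dir="phi-1_5-dpo"),
)
dpo_trainer.train()
```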
## How to use

### Chat Format

Given the nature of the training data, the Phi-1.5 Instruct model is best suited for prompts using the chat format. You can provide the prompt as a question with a generic template as follows:
```markdown
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Question?<|im_end|>
<|im_start|>assistant
```
For example:

```markdown
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
How would you explain the Internet to a medieval knight?<|im_end|>
<|im_start|>assistant
```
where the model generates the text after `<|im_start|>assistant`.
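Since this is the ChatML-style format, you can also let the tokenizer render the prompt via its chat template. This is a minimal sketch that assumes the repository's tokenizer config ships a matching chat template:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rasyosef/Phi-1_5-Instruct-v0.1")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How would you explain the Internet to a medieval knight?"},
]

# Renders the messages into the <|im_start|>...<|im_end|> format shown above,
# ending with the assistant header so the model continues from there.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```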
### Sample inference code

This code snippet shows how to quickly get started with running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

# Load the weights in their saved dtype and place them on the GPU.
model_id = "rasyosef/Phi-1_5-Instruct-v0.1"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype="auto"
)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
    {"role": "user", "content": "What about solving a 2x + 3 = 7 equation?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

# Greedy decoding: deterministic output, up to 500 new tokens.
generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
Note: If you want to use flash attention, call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="flash_attention_2"`.
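For example, assuming the flash-attn package is installed and the GPU supports it:

```python
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype="auto",
    attn_implementation="flash_attention_2",  # requires the flash-attn package
)
```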