---
library_name: transformers
license: mit
base_model: microsoft/phi-1_5
datasets:
  - teknium/OpenHermes-2.5
  - HuggingFaceH4/ultrafeedback_binarized
  - argilla/distilabel-intel-orca-dpo-pairs
  - jondurbin/py-dpo-v0.1
  - argilla/distilabel-math-preference-dpo
pipeline_tag: text-generation
---

Phi-1.5

The language model phi-1.5 is a Transformer with 1.3 billion parameters. It was trained using the same data sources as phi-1, augmented with a new data source consisting of various synthetic NLP texts. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, phi-1.5 demonstrates nearly state-of-the-art performance among models with fewer than 10 billion parameters.

Phi-1_5-Instruct-v0.1

This model underwent a post-training process that incorporates both supervised fine-tuning (SFT) and direct preference optimization (DPO) for instruction following. I used the trl library and a single A100 40GB GPU for both the SFT and DPO steps; a minimal sketch of the DPO step is shown below.
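
The following sketch illustrates what a DPO step with trl can look like. It is not the exact training script: the inline preference rows, output directory, and hyperparameters are placeholders (the actual run used the preference datasets listed in the metadata above), and the trl argument names may differ slightly between releases.

import torch
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Start from the base (or SFT) checkpoint
model_id = "microsoft/phi-1_5"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # DPOTrainer needs a padding token

# Toy preference data: each row pairs a prompt with a preferred ("chosen")
# and a dispreferred ("rejected") completion
preference_data = Dataset.from_list([
    {
        "prompt": "What is 2 + 2?",
        "chosen": "2 + 2 equals 4.",
        "rejected": "2 + 2 equals 22.",
    },
])

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="phi-1_5-dpo", beta=0.1, per_device_train_batch_size=1),
    train_dataset=preference_data,
    processing_class=tokenizer,  # named `tokenizer` in older trl releases
)
trainer.train()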

How to use

Chat Format

Given the nature of the training data, the Phi-1.5 Instruct model is best suited for prompts that use the chat format shown below. You can provide the prompt as a question with the following generic template:

<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Question?<|im_end|>
<|im_start|>assistant

For example:

<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
How to explain Internet for a medieval knight?<|im_end|>
<|im_start|>assistant

where the model generates the text after <|im_start|>assistant.
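
Rather than assembling this string by hand, you can build it with the tokenizer's chat template. A minimal sketch, assuming the model's tokenizer ships a chat template matching the format above:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rasyosef/Phi-1_5-Instruct-v0.1")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How to explain Internet for a medieval knight?"},
]

# add_generation_prompt=True appends the trailing "<|im_start|>assistant" turn
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)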

Sample inference code

This code snippet shows how to quickly get started with running the model on a GPU:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

model_id = "rasyosef/Phi-1_5-Instruct-v0.1"

# Load the model onto the GPU, letting transformers pick the dtype from the checkpoint
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype="auto",
)

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Multi-turn conversation in the chat format described above
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
    {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

# Greedy decoding; return only the newly generated text, not the prompt
generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
print(output[0]['generated_text'])

Note: If you want to use flash attention, call AutoModelForCausalLM.from_pretrained() with attn_implementation="flash_attention_2", as sketched below.
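
For example, reusing model_id and torch from the snippet above (this assumes the flash-attn package is installed and the GPU supports FlashAttention-2):

# FlashAttention-2 requires a half-precision dtype such as bf16 or fp16
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)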