
Phi-2-ORPO

Phi-2-ORPO is a fine-tuned version of microsoft/phi-2 on the argilla/dpo-mix-7k preference dataset using Odds Ratio Preference Optimization (ORPO). The model was trained for 1 epoch.

LazyORPO

This model has been trained using LazyORPO, a Colab notebook that makes the training process much easier. The notebook is based on the ORPO paper and was created by Zain Ul Abideen.
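
As a rough sketch of what ORPO fine-tuning on this dataset can look like with TRL's ORPOTrainer (the hyperparameters below are illustrative assumptions, not the exact recipe used for this model or by the notebook):

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token

# argilla/dpo-mix-7k provides chosen/rejected preference pairs; it may need
# mapping into the prompt/chosen/rejected text format expected by ORPOTrainer.
dataset = load_dataset("argilla/dpo-mix-7k", split="train")

config = ORPOConfig(
    output_dir="phi-2-orpo",
    num_train_epochs=1,             # the card states the model was trained for 1 epoch
    per_device_train_batch_size=2,  # illustrative value
    learning_rate=8e-6,             # illustrative value
    beta=0.1,                       # weight of the odds-ratio term (lambda in the paper)
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()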

What is ORPO?

Odds Ratio Preference Optimization (ORPO) is a new method for training LLMs that combines SFT and alignment into a single objective (loss function), achieving state-of-the-art results. Some highlights of this technique are listed below (a sketch of the loss follows the list):

  • 🧠 Reference model-free → memory friendly
  • 🔄 Replaces SFT+DPO/PPO with a single method (ORPO)
  • 🏆 ORPO outperforms SFT and SFT+DPO on Phi-2, Llama 2, and Mistral
  • 📊 Mistral-ORPO achieves 12.20% on AlpacaEval 2.0, 66.19% on IFEval, and 7.32 on MT-Bench, outperforming Hugging Face's Zephyr Beta
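
A minimal sketch of the objective, following the ORPO paper (the variable names are illustrative, not taken from this repository): the loss is the usual SFT negative log-likelihood on the chosen response plus an odds-ratio penalty that pushes the odds of the chosen response above those of the rejected one.

import torch
import torch.nn.functional as F

def orpo_loss(chosen_logps, rejected_logps, lam=0.1):
    """Illustrative ORPO objective.

    chosen_logps / rejected_logps: average per-token log-probabilities of the
    chosen and rejected responses under the policy model (shape: [batch]).
    lam: weight of the odds-ratio term (lambda in the paper).
    """
    # odds(y|x) = p / (1 - p), computed in log space for numerical stability
    log_odds_chosen = chosen_logps - torch.log1p(-torch.exp(chosen_logps))
    log_odds_rejected = rejected_logps - torch.log1p(-torch.exp(rejected_logps))

    # odds-ratio term: reward the chosen response for having higher odds than the rejected one
    ratio = F.logsigmoid(log_odds_chosen - log_odds_rejected)

    # SFT term: standard negative log-likelihood on the chosen response
    sft_loss = -chosen_logps

    return (sft_loss - lam * ratio).mean()

Because both terms use only the policy's own log-probabilities, no frozen reference model has to be kept in memory, which is where the memory savings come from.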

Usage

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device("cuda")

model = AutoModelForCausalLM.from_pretrained("abideen/phi2-pro", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("abideen/phi2-pro", trust_remote_code=True)

inputs = tokenizer('''
   """
   Write a detailed analogy between mathematics and a lighthouse.
   """''', return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)

Evaluation

COMING SOON

Model size: 2.78B params (BF16, Safetensors)