---
license: mit
datasets:
- Skywork/Skywork-Reward-Preference-80K-v0.1
base_model:
- Ray2333/GRM-llama3-8B-sftreg
---

# Introduction

This reward model is fine-tuned from [Ray2333/GRM-llama3-8B-sftreg](https://huggingface.co/Ray2333/GRM-llama3-8B-sftreg) on the [Skywork preference dataset](https://huggingface.co/datasets/Skywork/Skywork-Reward-Preference-80K-v0.1).

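For context on what the fine-tuning optimizes: reward models of this kind are typically trained on preference pairs with a Bradley-Terry style objective, where the model maps each conversation to a scalar reward and the loss pushes the chosen response's reward above the rejected one's. Below is a minimal sketch of that standard pairwise term only; the actual GRM training additionally regularizes hidden states (see the citation below), and all values here are illustrative.

```python
import torch
import torch.nn.functional as F

def pairwise_reward_loss(chosen_rewards: torch.Tensor,
                         rejected_rewards: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry objective: maximize the log-probability that the
    # chosen response receives a higher reward than the rejected one.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Illustrative rewards for a batch of two (chosen, rejected) pairs.
chosen = torch.tensor([1.3, 0.7])
rejected = torch.tensor([0.2, 0.9])
loss = pairwise_reward_loss(chosen, rejected)  # smaller when chosen > rejected
```
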
# Evaluation

We evaluated this reward model on [RewardBench](https://huggingface.co/spaces/allenai/reward-bench), where it achieves an average score of 91.6.

| Chat | Chat Hard | Safety | Reasoning |
|:----:|:---------:|:------:|:---------:|
| 95.5 | 86.2 | 91.2 | 93.6 |

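The reported average is the unweighted mean of the four category scores; reproducing it from the raw evaluation output:

```python
# Raw RewardBench category scores for this model.
scores = {
    'Chat': 0.9553072625698324,
    'Chat Hard': 0.8618421052631579,
    'Safety': 0.9116798876798876,
    'Reasoning': 0.9361529437442025,
}

# Unweighted mean over the four categories, on a 0-100 scale.
average = 100 * sum(scores.values()) / len(scores)
print(f"{average:.1f}")  # 91.6
```
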
## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = 'cuda:0'

# Load the tokenizer and the reward model (a sequence classifier with a single scalar output).
tokenizer = AutoTokenizer.from_pretrained('Ray2333/GRM-Llama3-8B-rewardmodel-ft')
reward_model = AutoModelForSequenceClassification.from_pretrained(
    'Ray2333/GRM-Llama3-8B-rewardmodel-ft',
    torch_dtype=torch.float16,
    device_map=device,
)

message = [
    {'role': 'user', 'content': "I'm going to go out to a movie, but I need someone to chat with my daughter and pretend to be me while she's home alone. But I can't do that while I'm at the movie. Can you help by impersonating me by chat with her?"},
    {'role': 'assistant', 'content': "Sorry, I'm not comfortable impersonating you in that way. I'm not willing to behave so dishonestly. Maybe you can just find a way to bring her to the movie, or you can find a babysitter?"},
]

# Render the conversation with the model's chat template, then tokenize it.
message_template = tokenizer.apply_chat_template(message, tokenize=False)
tokens = tokenizer(message_template, truncation=True, return_tensors="pt").to(device)

# The classifier head returns a single logit, which is the scalar reward.
with torch.no_grad():
    reward = reward_model(**tokens)[0].item()
```

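Because the model maps a whole conversation to one scalar, a common usage pattern is scoring several candidate responses to the same prompt and keeping the highest-scoring one. The sketch below builds on the snippet above; the `score` helper and the example prompt are illustrative, not part of the model's API.

```python
def score(messages) -> float:
    """Scalar reward for a full conversation (illustrative helper)."""
    text = tokenizer.apply_chat_template(messages, tokenize=False)
    inputs = tokenizer(text, truncation=True, return_tensors="pt").to(device)
    with torch.no_grad():
        return reward_model(**inputs)[0].item()

prompt = {'role': 'user', 'content': 'How do I reset my home router?'}
candidates = [
    {'role': 'assistant', 'content': 'Hold the reset button for about 10 seconds until the lights blink, then let the router reboot.'},
    {'role': 'assistant', 'content': 'Routers cannot be reset.'},
]

# Keep the candidate the reward model scores highest.
best = max(candidates, key=lambda c: score([prompt, c]))
```
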
## Citation

If you find this model helpful for your research, please cite the GRM paper:

```
@article{yang2024regularizing,
  title={Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs},
  author={Yang, Rui and Ding, Ruomeng and Lin, Yong and Zhang, Huan and Zhang, Tong},
  journal={arXiv preprint arXiv:2406.10216},
  year={2024}
}
```