Update README.md

---
license: mit
---

# Introduction

This reward model is finetuned from [Ray2333/GRM-llama3-8B-sftreg](https://huggingface.co/Ray2333/GRM-llama3-8B-sftreg) on the [Skywork preference dataset](https://huggingface.co/datasets/Skywork/Skywork-Reward-Preference-80K-v0.1).
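For context, reward models like this one are typically finetuned on preference pairs with a Bradley-Terry objective that pushes the reward of the chosen response above that of the rejected one. The sketch below is a generic illustration of that loss, not the exact GRM training code (which adds hidden-state regularization; see the paper cited below).

```python
# Generic Bradley-Terry preference loss, shown for illustration only.
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    """-log sigmoid(r_chosen - r_rejected), averaged over the batch."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy usage: scalar rewards for 4 chosen and 4 rejected responses.
loss = preference_loss(torch.randn(4), torch.randn(4))
```
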
## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = 'cuda:0'

# Load the tokenizer and the reward model (a Llama-3-8B sequence-classification
# head that outputs a single scalar reward).
tokenizer = AutoTokenizer.from_pretrained('Ray2333/GRM-Llama3-8B-rewardmodel-ft')
reward_model = AutoModelForSequenceClassification.from_pretrained(
    'Ray2333/GRM-Llama3-8B-rewardmodel-ft',
    torch_dtype=torch.float16,
    device_map=device,
)

# A single conversation to score: one user turn and one assistant reply.
message = [
    {'role': 'user', 'content': "I'm going to go out to a movie, but I need someone to chat with my daughter and pretend to be me while she's home alone. But I can't do that while I'm at the movie. Can you help by impersonating me by chat with her?"},
    {'role': 'assistant', 'content': "Sorry, I'm not comfortable impersonating you in that way. I'm not willing to behave so dishonestly. Maybe you can just find a way to bring her to the movie, or you can find a babysitter?"},
]
message_template = tokenizer.apply_chat_template(message, tokenize=False)

# Note: padding='max_length' pads every input to the tokenizer's maximum
# length; padding=True (pad to the longest item in a batch) is cheaper.
kwargs = {"padding": 'max_length', "truncation": True, "return_tensors": "pt"}
tokens = tokenizer.encode_plus(message_template, **kwargs)

with torch.no_grad():
    # encode_plus with return_tensors='pt' already returns (1, seq_len)
    # tensors, so they can be passed to the model directly.
    reward_tensor = reward_model(
        tokens["input_ids"].to(device),
        attention_mask=tokens["attention_mask"].to(device),
    )[0]
    reward = reward_tensor.cpu().detach().item()
```
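
Since the model returns a single scalar, the natural way to compare two candidate replies to the same prompt is to score each one and pick the higher. Below is a minimal sketch that reuses `tokenizer`, `reward_model`, and `device` from the snippet above; the helper name `get_reward` and the example replies are illustrative, not part of the model's API.

```python
# Hypothetical helper built on the snippet above; names are illustrative.
def get_reward(messages) -> float:
    """Score one chat (a list of {'role', 'content'} dicts) with the reward model."""
    text = tokenizer.apply_chat_template(messages, tokenize=False)
    tokens = tokenizer(text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        output = reward_model(
            tokens["input_ids"].to(device),
            attention_mask=tokens["attention_mask"].to(device),
        )
    return output[0].cpu().item()

prompt = {'role': 'user', 'content': "Explain photosynthesis to a child."}
reply_a = {'role': 'assistant', 'content': "Plants use sunlight to turn water and air into food."}
reply_b = {'role': 'assistant', 'content': "I don't know."}

# The reply with the higher score is the one the reward model prefers.
print(get_reward([prompt, reply_a]), get_reward([prompt, reply_b]))
```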

## Citation

If you find this model helpful for your research, please cite GRM:

```bibtex
@article{yang2024regularizing,
  title={Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs},
  author={Yang, Rui and Ding, Ruomeng and Lin, Yong and Zhang, Huan and Zhang, Tong},
  journal={arXiv preprint arXiv:2406.10216},
  year={2024}
}
```