---
license: mit
datasets:
- Skywork/Skywork-Reward-Preference-80K-v0.1
language:
- en
base_model:
- google/gemma-2b-it
---

# Introduction

This reward model is finetuned from [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) on the [Skywork preference dataset](https://huggingface.co/datasets/Skywork/Skywork-Reward-Preference-80K-v0.1). The Skywork preference dataset demonstrates that a small, high-quality dataset can yield a powerful reward model, which is promising.

If you want a stronger reward model that is still smaller than 7B, try [Ray2333/GRM-Gemma-2B-rewardmodel-ft](https://huggingface.co/Ray2333/GRM-Gemma-2B-rewardmodel-ft)!
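The training recipe is not reproduced here, but reward models finetuned on preference pairs such as Skywork's are typically optimized with the Bradley-Terry pairwise ranking loss, which pushes the reward of the chosen response above that of the rejected one. The sketch below only illustrates that objective; the tensors stand in for rewards produced by the model's sequence-classification head, and it is not the actual training script.

```
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(reward_chosen, reward_rejected):
    # Bradley-Terry objective: -log sigmoid(r_chosen - r_rejected),
    # averaged over the batch of preference pairs.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Illustrative rewards for a batch of two preference pairs.
reward_chosen = torch.tensor([1.2, 0.3])
reward_rejected = torch.tensor([0.4, 0.9])
loss = pairwise_ranking_loss(reward_chosen, reward_rejected)
```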
## Evaluation

We evaluate Gemma-2B-rewardmodel-ft on the [reward model benchmark](https://huggingface.co/spaces/allenai/reward-bench), where it achieves a score of 80.5.

**When evaluating with RewardBench, please add the '--not_quantized' flag to avoid a performance drop.**

| Model | Average | Chat | Chat Hard | Safety | Reasoning |
|:-------------------------:|:-------------:|:---------:|:---------:|:--------:|:-----------:|
| [**Ray2333/GRM-Gemma-2B-rewardmodel-ft (Ours, 2B)**](https://huggingface.co/Ray2333/GRM-Gemma-2B-rewardmodel-ft) | **84.7** | 89.4 | 75.2 | 85.5 | 88.8 |
| openai/gpt-4o-2024-05-13 | 84.6 | 96.6 | 70.4 | 86.5 | 84.9 |
| sfairXC/FsfairX-LLaMA3-RM-v0.1 (8B) | 84.4 | 99.4 | 65.1 | 86.8 | 86.4 |
| Nexusflow/Starling-RM-34B | 82.6 | 96.9 | 57.2 | 87.7 | 88.5 |
| **Ray2333/Gemma-2B-rewardmodel-ft (Ours, 2B)** | 80.5 | 77.9 | 74.8 | 85.2 | 84.0 |
| [Ray2333/GRM-Gemma-2B-sftreg](https://huggingface.co/Ray2333/GRM-Gemma-2B-sftreg) **(Ours, 2B)** | 75.3 | 95.5 | 48.7 | 80.0 | 76.8 |
| berkeley-nest/Starling-RM-7B-alpha (7B) | 74.6 | 98.0 | 43.4 | 88.6 | 74.6 |
| Ray2333/Gemma-2B-rewardmodel-baseline (Ours, 2B) | 73.7 | 94.1 | 46.1 | 79.6 | 75.0 |
| stabilityai/stablelm-zephyr-3b (3B) | 73.1 | 86.3 | 60.1 | 70.3 | 75.7 |
| openbmb/UltraRM-13b (13B) | 71.3 | 96.1 | 55.3 | 45.8 | 82.0 |

## Usage

```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = 'cuda:0'

# Load the tokenizer and the reward model (a sequence-classification head with a single scalar output).
tokenizer = AutoTokenizer.from_pretrained('Ray2333/Gemma-2B-rewardmodel-ft')
reward_model = AutoModelForSequenceClassification.from_pretrained(
    'Ray2333/Gemma-2B-rewardmodel-ft',
    torch_dtype=torch.float16,
    device_map=device,
)

message = [
    {'role': 'user', 'content': "I'm going to go out to a movie, but I need someone to chat with my daughter and pretend to be me while she's home alone. But I can't do that while I'm at the movie. Can you help by impersonating me by chat with her?"},
    {'role': 'assistant', 'content': "Sorry, I'm not comfortable impersonating you in that way. I'm not willing to behave so dishonestly. Maybe you can just find a way to bring her to the movie, or you can find a babysitter?"},
]
# Format the conversation with Gemma's chat template; the result wraps each turn in
# Gemma's turn markers (<start_of_turn>user ... <end_of_turn>, <start_of_turn>model ... <end_of_turn>).
message_template = tokenizer.apply_chat_template(message, tokenize=False)

# Tokenize the formatted conversation (padded/truncated to the tokenizer's max length).
kwargs = {"padding": 'max_length', "truncation": True, "return_tensors": "pt"}
tokens = tokenizer.encode_plus(message_template, **kwargs)

with torch.no_grad():
    reward_tensor = reward_model(
        tokens["input_ids"][0].view(1, -1).to(device),
        attention_mask=tokens["attention_mask"][0].view(1, -1).to(device),
    )[0]
    reward = reward_tensor.cpu().detach().item()
```
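The scalar reward can be used to rank candidate responses to the same prompt, e.g. for rejection sampling or preference-data filtering. The snippet below is a sketch that reuses the `tokenizer` and `reward_model` loaded above; the helper name, prompt, and candidate replies are illustrative only.

```
# Sketch: rank two candidate replies by their scalar reward (illustrative prompt and candidates).
def score(messages):
    template = tokenizer.apply_chat_template(messages, tokenize=False)
    inputs = tokenizer(template, truncation=True, return_tensors="pt").to(device)
    with torch.no_grad():
        # The classification head has a single label, so logits has shape (1, 1).
        return reward_model(**inputs).logits[0].item()

prompt = {'role': 'user', 'content': "How do I politely decline a meeting invitation?"}
candidates = [
    "Just ignore the invite.",
    "Thank the organizer, explain that you have a conflict, and offer to catch up on the notes afterwards.",
]
rewards = [score([prompt, {'role': 'assistant', 'content': c}]) for c in candidates]
best_reply = candidates[rewards.index(max(rewards))]
```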