---
datasets:
- HuggingFaceH4/no_robots
language:
- en
license: cc-by-nc-4.0
---

# Good Robot 🤖

> [!NOTE]
> There is an updated version of this model available; please see [Good Robot 2 →](https://huggingface.co/kubernetes-bad/good-robot-2).


The model "Good Robot" had one simple goal in mind: to be a good instruction-following model that doesn't talk like ChatGPT.

Built upon the Mistral 7B base, this model aims to provide responses that are as human-like as possible, thanks to DPO training on the (for now, private) `minerva-ai/yes-robots-dpo` dataset.


HuggingFaceH4/no_robots was used as the base for generating a custom dataset of DPO preference pairs (see the sketch below).
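For illustration only: a DPO preference pair is a prompt plus a preferred ("chosen") and a dispreferred ("rejected") completion. The actual yes-robots-dpo pipeline is private, so the sketch below only shows the general record format such pairs usually take (e.g. what TRL's `DPOTrainer` expects); every string in it is hypothetical, not a real dataset row.

```python
# Minimal sketch of a DPO preference-pair record. The real
# yes-robots-dpo construction pipeline is private; all strings
# below are hypothetical illustrations.
example_pair = {
    # instruction, conceptually sourced from HuggingFaceH4/no_robots
    "prompt": "Explain why the sky is blue in two sentences.",
    # human-sounding answer: preferred
    "chosen": (
        "Sunlight scatters off air molecules, and blue light scatters "
        "the most. So when you look up, blue is what reaches your eyes "
        "from every direction."
    ),
    # GPT-flavored answer: dispreferred
    "rejected": (
        "Certainly! The sky appears blue due to a phenomenon known as "
        "Rayleigh scattering. It is important to note that..."
    ),
}
```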

It should follow instructions and be generally as smart as a typical Mistral model - just not as soulless and full of GPT slop.

## Prompt Format:

Alpaca, my beloved ❤️
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{your prompt goes here}

### Response:
```
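As a quick usage sketch, the snippet below wraps a prompt in the Alpaca template and generates with 🤗 Transformers. The repo id `kubernetes-bad/good-robot` is an assumption inferred from the Good Robot 2 link above, not confirmed by this card; substitute the real model path if it differs.

```python
# Hedged usage sketch: format a prompt with the Alpaca template
# shown above, then generate. Repo id below is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "kubernetes-bad/good-robot"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

def alpaca_prompt(instruction: str) -> str:
    """Wrap an instruction in the Alpaca prompt template."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

inputs = tokenizer(
    alpaca_prompt("Write a haiku about robots."), return_tensors="pt"
).to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, temperature=0.7
)
# Decode only the newly generated tokens, skipping the prompt.
print(
    tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
)
```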

## Huge Thanks:
- Gryphe for DPO scripts and all the patience 🙏

## Training Data:
- [HuggingFaceH4/no_robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots)
- [MinervaAI/yes-robots-dpo](https://huggingface.co/MinervaAI)
- private datasets with common GPTisms


## Limitations:

While I did my best to minimize GPTisms, no model is perfect, and there may still be instances where the generated content contains GPT's common phrases. I suspect that's because they are ingrained in the Mistral base model itself.

## License:
cc-by-nc-4.0