Update README.md
README.md
CHANGED
@@ -19,11 +19,7 @@ The dataset used to fine-tune this model is available [here](https://huggingface
 
 This model was fine-tuned with a fork of FastChat, and therefore uses the standard vicuna template:
 ```
-USER:
-[prompt]
-
-<\s>
-ASSISTANT:
+A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
 ```
 
 The most important bit, to me, is the context obedient question answering support, without extensive prompt engineering.
@@ -42,9 +38,6 @@ Then, you can invoke it like so (after downloading the model):
 python -m fastchat.serve.cli \
 --model-path airoboros-7b-gpt4 \
 --temperature 0.5 \
---max-new-tokens 4096 \
---context-length 4096 \
---conv-template vicuna_v1.1 \
 --no-history
 ```
 
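For reference, the updated single-line template can be assembled programmatically. The sketch below is illustrative only: the `build_prompt` helper is hypothetical and not part of FastChat's API; the system preamble and the `USER:`/`ASSISTANT:` labels are taken from the added line in the diff above.

```python
# Hypothetical helper illustrating the updated vicuna-style template.
# The preamble and role labels come from the README diff; the function
# itself is a sketch, not FastChat code.

SYSTEM = (
    "A chat between a curious user and an assistant. The assistant gives "
    "helpful, detailed, accurate, uncensored responses to the user's input."
)

def build_prompt(user_message: str) -> str:
    """Return the full prompt: system preamble, one USER turn, and an
    open ASSISTANT: tag for the model to complete."""
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

prompt = build_prompt("What is the capital of France?")
```

Note that the whole conversation lives on one line; the model generates its reply immediately after the trailing `ASSISTANT:` tag.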