Doctor-Shotgun committed on
Commit ede7d6e
1 Parent(s): 84e2892

Update README.md

Files changed (1)
  1. README.md +57 -10
README.md CHANGED
@@ -1,32 +1,79 @@
  ---
  tags:
  - generated_from_trainer
  model-index:
- - name: limarp-lora-out
  results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
  [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- # limarp-lora-out

- This model was trained from scratch on the None dataset.
  It achieves the following results on the evaluation set:
  - Loss: 1.9729

  ## Model description

- More information needed

  ## Intended uses & limitations

- More information needed

  ## Training and evaluation data

- More information needed

  ## Training procedure
@@ -78,4 +125,4 @@ The following hyperparameters were used during training:
  - Transformers 4.34.1
  - Pytorch 2.0.1+cu118
  - Datasets 2.14.6
- - Tokenizers 0.14.1
 
  ---
+ inference: false
  tags:
  - generated_from_trainer
+ - Yi
  model-index:
+ - name: limarpv3-yi-llama-34b-lora
  results: []
+ license: apache-2.0
  ---

  [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+ # limarpv3-yi-llama-34b-lora
+
+ This model is an unofficial training of Yi-34B-Llama on the LimaRP v3 dataset by [lemonilia](https://huggingface.co/lemonilia). It does not include the pretraining stage using stories.
+
+ The [Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama) model is a modified [01-ai/Yi-34B](https://huggingface.co/01-ai/Yi-34B) with keys renamed to match those used in Llama models, eliminating the need for remote code and ensuring compatibility with existing training and inference repositories. Architecturally, it is similar to a Llama 2 34B model with an expanded vocabulary size of 64,000.

  It achieves the following results on the evaluation set:
  - Loss: 1.9729

  ## Model description

+ For more details about LimaRP, see the model page for the [previously released v2 version for Llama-2](https://huggingface.co/lemonilia/limarp-llama2-v2). Most of the details written there apply to this version as well. Generally speaking, LimaRP is a longform-oriented, novel-style roleplaying chat model intended to replicate the experience of 1-on-1 roleplay on Internet forums. Short-form, IRC/Discord-style RP (aka "Markdown format") is not supported yet. The model has not been instruction-tuned; the training data consists only of manually picked and slightly edited RP conversations with persona and scenario data.
+
+ Prompt format is the [extended Alpaca format](https://github.com/tatsu-lab/stanford_alpaca):
+
+ ```
+ ### Instruction:
+ Character's Persona: {bot character description}
+ User's Persona: {user character description}
+ Scenario: {what happens in the story}
+ Play the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
+ ### Input:
+ User: {utterance}
+ ### Response:
+ Character: {utterance}
+ ### Input:
+ User: {utterance}
+ ### Response:
+ Character: {utterance}
+ (etc.)
+ ```
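+
+ As an illustration, here is a minimal Python sketch of assembling this prompt format programmatically; the `build_prompt` helper and its argument names are illustrative assumptions, not part of any released code:
+
+ ```python
+ def build_prompt(bot_name, user_name, bot_persona, user_persona, scenario, turns):
+     """Assemble an extended-Alpaca roleplay prompt.
+
+     `turns` is a list of (speaker, text) tuples alternating between user and bot.
+     """
+     parts = [
+         "### Instruction:",
+         f"{bot_name}'s Persona: {bot_persona}",
+         f"{user_name}'s Persona: {user_persona}",
+         f"Scenario: {scenario}",
+         f"Play the role of {bot_name}. You must engage in a roleplaying chat with {user_name} "
+         f"below this line. Do not write dialogues and narration for {user_name}.",
+     ]
+     for speaker, text in turns:
+         header = "### Input:" if speaker == user_name else "### Response:"
+         parts += [header, f"{speaker}: {text}"]
+     # End with an open response header so the model continues as the bot character.
+     parts += ["### Response:", f"{bot_name}:"]
+     return "\n".join(parts)
+ ```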
+
+ Inspired by the preset formerly named "Roleplay" in SillyTavern, this version of LimaRP makes it possible to append a length modifier to the response instruction sequence, like this:
+
+ ```
+ ### Input:
+ User: {utterance}
+
+ ### Response: (length = medium)
+ Character: {utterance}
+ ```
+
+ This has an immediately noticeable effect on bot responses. The lengths used during training are:
+ `micro`, `tiny`, `short`, `medium`, `long`, `massive`, `huge`, `enormous`, `humongous`, `unlimited`.
+ **The recommended starting length is medium**. Keep in mind that the AI can ramble or impersonate
+ the user with very long messages.
+
+ The length control effect is reproducible, but the messages will not necessarily match the
+ requested length precisely; rather, they follow certain ranges on average, as seen in this table
+ with data from tests made with one reply at the beginning of the conversation:
+
+ ![lengths](https://i.imgur.com/2WXGgaV.png)
+
+ Response length control also appears to work well deep into the conversation. **By omitting
+ the modifier, the model will choose the most appropriate response length** (although it might
+ not necessarily be what the user desires).
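+
+ As a small, hypothetical helper, the length modifier could be attached to the response header when building a prompt; omitting it lets the model pick a length on its own:
+
+ ```python
+ def response_header(length=None):
+     """Return the response instruction line, optionally with a length modifier."""
+     if length is None:
+         return "### Response:"  # model chooses the response length itself
+     return f"### Response: (length = {length})"
+
+ print(response_header("medium"))  # -> ### Response: (length = medium)
+ ```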

  ## Intended uses & limitations

+ The model will show biases similar to those observed in niche roleplaying forums on the Internet, in addition to those exhibited by the base model.
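+
+ Since this release is a LoRA adapter rather than a merged model, it would typically be applied on top of the Yi-34B-Llama base with PEFT. Below is a minimal, untested sketch; the adapter repository id, dtype, and sampling settings are assumptions, not documented values:
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel
+
+ base_id = "chargoddard/Yi-34B-Llama"                       # Llama-keyed base, no remote code needed
+ adapter_id = "Doctor-Shotgun/limarpv3-yi-llama-34b-lora"   # assumed id of this adapter repository
+
+ tokenizer = AutoTokenizer.from_pretrained(base_id)
+ model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
+ model = PeftModel.from_pretrained(model, adapter_id)
+
+ # Prompt in the extended Alpaca format described above, ending with an open response header.
+ prompt = (
+     "### Instruction:\n"
+     "Character's Persona: A cheerful tavern keeper in a small fantasy town.\n"
+     "User's Persona: A weary traveler looking for a room.\n"
+     "Scenario: The traveler arrives at the tavern late at night.\n"
+     "Play the role of Character. You must engage in a roleplaying chat with User below this line. "
+     "Do not write dialogues and narration for User.\n"
+     "### Input:\n"
+     "User: Good evening. Do you still have a room free?\n"
+     "### Response: (length = medium)\n"
+     "Character:"
+ )
+
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ output = model.generate(**inputs, max_new_tokens=300, do_sample=True, temperature=0.8)
+ print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
+ ```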

  ## Training and evaluation data

+ For more details about LimaRP, see the model page for the [previously released v2 version for Llama-2](https://huggingface.co/lemonilia/limarp-llama2-v2).

  ## Training procedure

  - Transformers 4.34.1
  - Pytorch 2.0.1+cu118
  - Datasets 2.14.6
+ - Tokenizers 0.14.1