Text Generation
GGUF
English
Spanish
conversational
chat
roleplay
Inference Endpoints
XeTute committed on
Commit
a5ea12e
1 Parent(s): 09ccde3

Update README.md

Files changed (1)
  1. README.md +39 -48
README.md CHANGED
@@ -8,20 +8,7 @@ language:
  tags:
  - conversational
  - chat
- - rp
  - roleplay
- - friend
- - slm
- - small
- - slim
- - slender
- - general
- - creative
- co2_eq_emissions:
- emissions: 200
- training_type: fine-tuning
- hardware_used: 1 GTX1060-3GB, AMD Radeon(TM) Graphics & AMD Ryzen 5 5600G[4.4GHz OC]
- base_model: XeTute/AURORA-OpenBeta-V0.5-GGUF
  library_name: GGUF
  pipeline_tag: text-generation
  ---
@@ -30,40 +17,44 @@ pipeline_tag: text-generation
 
  <a href='https://ko-fi.com/C0C2ZXNON' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi3.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
 
- NOTE / ANNOUNCEMENT:
- We've jumped from V0.5 straight to this version, V1.0, which is the last version of the series.
- We're sad to announce the end of XT_AURORA, our first SLM series, due to a lack of community activity.
- We, XeTute, have put a lot of effort and countless nights into improving our models, but given how much time, passion, and effort we've invested, we got nothing back from the community.
- Thank you for so many downloads on this series of SLMs. We'll continue to update model cards and chat templates.
- Thank you for being part of our journey.
-
- About this model:
- This model, XT_AURORA, was trained and published by us, XeTute. It was finetuned on top of the previous beta version [XT_AURORA-OpenBeta-V0.5-GGUF].
- This version [V1.0] achieves better general performance and outperforms every previous model [V0.1 - V0.5].
- We asked ChatGPT-4o to ask some questions and rate the answers on a scale of 1 to 10.
- The average rating was 7.5.
-
- About XT_AURORA:
- XT_AURORA is a series of SLMs [Slender Language Models], which all aim to provide a friendly, human-like conversation.
- The series is limited by its size [about 1.1B params], but we still try to get the best possible output.
- The context length is very stable up to 2048 tokens; beyond that limit, the model performs only slightly better than V0.5.
- The context can be upscaled using RoPE, at the cost of slightly weaker logic.
-
- About this version [V1.0]:
- * High-quality output [sometimes outperforms 3B models on HumanEval], as long as the context size is under 2049 tokens.
- * We provide a system prompt [Files and Versions --> chat_template]. The SLM was partly trained using that template, so output is better if you use the prompt at the start.
- * AURORA expects the chat template to be Vicuna [{{user}}: {some input}\nAURORA: {some output}\n{{user}}]. The model will only work correctly with this format.
- * The recommended temperature is 0.4 to 0.75.
- * Improved chat quality in general emotional / unemotional chat, logical & illogical roleplaying, etc.
-
- All in all, AURORA aims to provide a digital friend that is also accessible to people with low-end devices.
-
- Using KoboldCPP, we got the model running [using Termux] on a POCO X5 Pro 5G [CPU only, octa-core].
- We saw ~5 tokens per second generated and ~15 tokens per second processed. [In Energy Saver mode]
-
- Support us:
- X: <https://www.x.com/XeTute>
- GitHub: <https://www.github.com/N0CTRON/>
- Subdomain on Neocities: <https://xetute.neocities.org/>
 
  We wish you a friendly chat with AURORA.
 
+ Note:
+ - All previous beta versions of this series of SLMs were deleted because they received almost no downloads.
+ - V1.0 is the last model in this series that will be published, due to too little community activity.
+
+ Metadata:
+ - Name:
+ - - AURORA
+ - Version:
+ - - 1.0
+ - Author:
+ - - XeTute
+ - Size:
+ - - 1.1B
+ - Architecture:
+ - - LLaMA, Transformer.
+
+ We introduce AURORA V1.0, the first model in this series that is actually usable.
+ Its use cases are the following:
+ - Next-word prediction for mobile devices:
+ - - This model can be reliably packaged into a keyboard app to help make next-word suggestions more accurate.
+ - Conversations:
+ - - AURORA can engage in conversations using the Vicuna format; remember to replace "ASSISTANT" with "AURORA", though.
+ - - AURORA can engage in SFW roleplay with simple character definitions. It wasn't trained on NSFW.
+ - - AURORA can engage in simple, short Q&A. It was also trained on factual data, which means it performs well for its size.
+
+ Recommended settings:
+ - Temperature: 0.1 to 0.4 is stable.
+ - Context length: 2048 (base) to 4096 (RoPE) will work well for storytelling, roleplaying, and simple conversations.
+ - Output length: 256 will work very stably, but you can extend it to 512. Anything beyond that point is risky; text might become repetitious.
+ - Chat format, for roleplay:
+ ```
+ {name of your character}: {input}
+ {name of AURORA's character}: {output}
+ ```
+ or, for normal chatting:
+ ```
+ USER: {input}
+ AURORA: {output}
+ ```
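The chat format above can be assembled programmatically before handing the prompt to whatever GGUF runner you use. A minimal sketch in plain Python; the helper name `build_prompt` is our own illustration, not part of the model card:

```python
def build_prompt(history, user_name="USER", bot_name="AURORA"):
    """Render a chat history into the Vicuna-style template AURORA expects:
    one "{speaker}: {text}" line per turn, ending with the bot's name as a
    cue for the model to continue from there."""
    lines = [f"{user_name if role == 'user' else bot_name}: {text}"
             for role, text in history]
    lines.append(f"{bot_name}:")  # the model completes after this cue
    return "\n".join(lines)

prompt = build_prompt([
    ("user", "Hello!"),
    ("assistant", "Hi! How can I help?"),
    ("user", "Tell me a short story."),
])
```

When serving the model, passing `"USER:"` as a stop sequence keeps generation from running into the next user turn.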
 
  We wish you a friendly chat with AURORA.
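Since the card recommends staying within a 2048-token context, long chats need their oldest turns dropped before prompting. A minimal sketch under a rough assumption of ~4 characters per token (the real count depends on the tokenizer); the helper name and the per-turn overhead constant are our own:

```python
def trim_history(history, budget_tokens=2048, reserve=256, chars_per_token=4):
    """Drop the oldest (role, text) turns until the estimated prompt size
    fits the context window, keeping `reserve` tokens free for the reply.
    Token counts are estimated as len(text) / chars_per_token."""
    budget_chars = (budget_tokens - reserve) * chars_per_token
    kept, used = [], 0
    for role, text in reversed(history):  # keep the most recent turns
        cost = len(text) + 16  # rough allowance for the "NAME: " prefix
        if used + cost > budget_chars:
            break
        kept.append((role, text))
        used += cost
    return list(reversed(kept))
```

A real integration would count tokens with the model's own tokenizer instead of a character heuristic.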