---
license: apache-2.0
language:
- en
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- roleplaying
- bfloat16
- brainstorm 20x
- swearing
- rp
- horror
- llama2
- mergekit
pipeline_tag: text-generation
---

(quants uploading...)

Psyonic-Cetacean-Depth-Charge-13B

This is a LLaMA2 model with a maximum context of 4096 (or 16k+ with rope). It has been designed to be relatively bulletproof and operates with most parameters, including temp settings from 0 to 5.

This model is for any writing, fiction, or storytelling activity, as well as roleplay and other creative activities. This compressed version is an ode to the original "Psyonic-Cetacean 20B" by Jeb Carter. It requires the "Llama2" template and/or the "Alpaca" template. Example outputs below.

Psyonic-Cetacean? ... wait a minute.

Yes, this is a compressed version of Jeb Carter's fantastic "Psyonic-Cetacean 20B". I used the same float-32 version files to create this version as were used to create the "Ultra" versions here:

[ https://huggingface.co/DavidAU/Psyonic-Cetacean-Ultra-Quality-20b-GGUF ]

The main differences are the size (13B vs 20B) and this version's perplexity, which is much lower than that of the original or Ultra versions. This version will also be faster in tokens-per-second generation. Although every attempt was made to preserve all functions, features, and the voice of the original 20B, there will be some slight differences; however, this model will work at all parameter settings due to the compression and style of this merge. All models used (and their upstream counterparts) were used to create this 13B version.

Model Template:

This is a LLAMA2 model and requires the Alpaca or Llama2 template, but it may work with other templates. It has a maximum context of 4k / 4096; however, this can be extended using "rope" settings up to 16k (a loading sketch follows the two template definitions below).

Here is the standard ALPACA template (best for storytelling / long form):
```json
{
  "name": "Alpaca",
  "inference_params": {
    "input_prefix": "### Instruction:",
    "input_suffix": "### Response:",
    "antiprompt": [
      "### Instruction:"
    ],
    "pre_prompt": "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
  }
}
```
Here is the standard LLAMA2 template (best for general usage):
```json
{
  "name": "Llama 2",
  "inference_params": {
    "input_prefix": "[INST]",
    "input_suffix": "[/INST]",
    "antiprompt": [
      "[INST]"
    ],
    "pre_prompt_prefix": "",
    "pre_prompt_suffix": ""
  }
}
```
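For local use, these templates map directly onto a raw prompt string. Below is a minimal sketch using llama-cpp-python that loads a quant with the RoPE-extended 16k context mentioned above and generates with the Alpaca format. The GGUF filename, the instruction text, and the exact newline placement are assumptions for illustration, not part of this card.

```python
# A minimal sketch, assuming llama-cpp-python; the GGUF filename below is
# hypothetical -- point it at whichever quant you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="psyonic-cetacean-depth-charge-13b.Q4_K_M.gguf",  # hypothetical path
    n_ctx=16384,           # extended from the native 4096 context...
    rope_freq_scale=0.25,  # ...via linear RoPE scaling (4096 * 4 = 16384)
)

# Assemble the prompt as the Alpaca template above specifies.
pre_prompt = ("Below is an instruction that describes a task. "
              "Write a response that appropriately completes the request.\n\n")
instruction = "Continue the scene: the sonar pings once, then falls silent."  # example only
prompt = f"{pre_prompt}### Instruction:\n{instruction}\n### Response:\n"

out = llm(prompt, max_tokens=512, temperature=0.8, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```

The `stop` list mirrors the template's `antiprompt` entry, so generation halts before the model starts writing a new instruction block of its own.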
Model "DNA": Special thanks to the incredible work of the model makers "Microsoft", and "KoboldAI". Models used: [ https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2-GGUF ] [ https://huggingface.co/microsoft/Orca-2-13b ] Jeb Carter: [ https://huggingface.co/jebcarter/psyonic-cetacean-20B ] And models used in "LLaMA2-13B-Psyfighter2" (used in full at Float32 to recreate this model): [ https://huggingface.co/TheBloke/Llama-2-13B-fp16 ] [ https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter ] [ https://huggingface.co/Doctor-Shotgun/cat-v1.0-13b ] [ https://huggingface.co/Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged ] Parts of these models were "grafted" / "fused" together to create this model. Optional Enhancement: The following can be used in place of the "system prompt" or "system role" to further enhance the model. It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along. In this case the enhancements do not have as strong effect at using "system prompt" or "system role". Copy and paste EXACTLY as noted, DO NOT line wrap or break the lines, maintain the carriage returns exactly as presented.
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.

Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)

[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)

Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
You do not need to use this; it is only presented as an additional enhancement that seems to help scene generation and scene-continuation functions. This enhancement WAS NOT used to generate the examples below. A sketch of how to supply it as a system prompt follows.
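If your front end exposes a chat API rather than a raw system-prompt field, the block above can be passed as the system message. A minimal sketch, assuming the llama-cpp-python setup from the earlier example and a hypothetical enhancement.txt file holding a verbatim paste of the text above:

```python
# A minimal sketch, assuming "llm" is the Llama instance from the earlier
# example. "enhancement.txt" is a hypothetical file containing a verbatim
# paste of the enhancement block above (carriage returns intact).
with open("enhancement.txt", encoding="utf-8") as f:
    enhancement = f.read()

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": enhancement},  # strongest effect here
        {"role": "user", "content": "Write the opening scene of a deep-sea horror story."},
    ],
    temperature=0.8,
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```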

EXAMPLE PROMPTS AND OUTPUT:

Examples were created using quant Q4_K_M, "temp=.8" (unless otherwise stated), minimal parameters, and the "LLAMA2" template. The model has been tested with "temp" from ".1" to "5"; a small sweep along those lines is sketched below. These are the least creative outputs; each prompt is in BOLD.
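A minimal sketch of such a temperature sweep, reusing the `llm` and `prompt` objects from the Alpaca example earlier; the specific temperature values are illustrative, not the exact grid used to test this card:

```python
# A minimal sketch of a temperature sweep, assuming "llm" and "prompt"
# from the earlier Alpaca example. The values below are illustrative.
for temp in (0.1, 0.8, 1.5, 3.0, 5.0):
    out = llm(prompt, max_tokens=256, temperature=temp, stop=["### Instruction:"])
    print(f"--- temp={temp} ---")
    print(out["choices"][0]["text"])
```

---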