
[Cover image: Made with NovelAI]

Welcome, brave one; you've come a long mile.

MN-12B-Mag-Mell-R1

This is a merge of pre-trained language models created using mergekit.

Q4_K_M, Q6_K and Q8_0 GGUFs by me

More available from mradermacher

Usage Details

Sampler Settings

Mag Mell R1 was tested with Temp 1.25 and MinP 0.2. These settings were fairly stable up to 10K tokens of context, but they might run too "hot" for some use cases. If coherency issues occur, try increasing MinP or decreasing Temperature.

Other samplers shouldn't be necessary. XTC was shown to break outputs. DRY should be okay if used sparingly. Other penalty-type samplers should probably be avoided.
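If it helps, here's a minimal sketch of these settings applied through llama-cpp-python, one of several ways to run the GGUFs; the file name and prompt below are placeholders, not part of the release:

```python
# Sketch: running a Mag Mell GGUF with the tested sampler settings
# (Temp 1.25, MinP 0.2) and everything else left neutral.
from llama_cpp import Llama

llm = Llama(
    model_path="MN-12B-Mag-Mell-R1.Q6_K.gguf",  # illustrative file name
    n_ctx=10240,  # tested as stable up to ~10K context
)

out = llm.create_completion(
    prompt="<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n",
    max_tokens=256,
    temperature=1.25,
    min_p=0.2,
    top_k=0,             # disabled
    top_p=1.0,           # disabled
    repeat_penalty=1.0,  # penalty samplers off, per the notes above
    stop=["<|im_end|>"],
)
print(out["choices"][0]["text"])
```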

Formatting

The base model for Mag Mell is Mistral-Nemo-Base-2407-chatml, and as such ChatML formatting is recommended.
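For reference, a ChatML prompt looks like the following; the messages themselves are placeholders, and generation should stop at <|im_end|>:

```
<|im_start|>system
You are a helpful co-writer.<|im_end|>
<|im_start|>user
Set the scene for an adventure.<|im_end|>
<|im_start|>assistant
```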

However, many component models still use Mistral's format. As a result, the word "user" or "assistant" will occasionally leak into the end of an output.

However. Some things have come to light regarding Mistral's format that should be covered here, and they implicate not just Mag Mell, but all Mistral-based models since the original Mistral 7B.

The following information is as correct as I can get it as of September 20th, 2024.

We've had Mistral's tokenizer handling and completions format all wrong. The templates in your frontend are probably wrong right now.

MistralAI member Pandora has been going around helping to correct everyone.

Right now, Pandora has opened PRs for SillyTavern (MERGED to Staging; update and use Mistral V3-Tekken), KoboldAI Lite, and KoboldCPP.

Once these are merged, the templates those frontends ship can be assumed to be correct.

Until then, I've provided templates for SillyTavern on GitHub that should be More Correct than the ones ST currently ships. Use Mistral V3-Tekken from SillyTavern Staging. If you don't want to (or can't) update, you can get the new prompt template files here (in the context and instruct folders).
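For orientation only, given the caveat above that the details are still being settled: the V3-Tekken layout is usually shown without the spaces around the [INST] tags that older Mistral templates used, roughly:

```
<s>[INST]user message[/INST]assistant response</s>[INST]next user message[/INST]
```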

If you experiment with this, please let me know how it goes! The conversation on how to properly implement Mistral is still ongoing.

Merge Details

Multi-stage SLERP merge, DARE-TIES'd together. Intended to be a general purpose "Best of Nemo" model for any fictional, creative use case. Inspired by hyper-merges like Tiefighter and Umbral Mind.

Mag Mell is composed of 3 intermediate parts (full configurations below):

  • Monk: nbeerbower/mistral-nemo-bophades-12B + nbeerbower/mistral-nemo-wissenschaft-12B
  • Hero: elinas/Chronos-Gold-12B-1.0 + Fizzarolli/MN-12b-Sunrose
  • Deity: nbeerbower/mistral-nemo-gutenberg-12B-v4 + anthracite-org/magnum-12b-v2.5-kto

I've been dreaming about this merge since Nemo tunes started coming out in earnest. In our testing, Mag Mell demonstrated worldbuilding capabilities unlike any model in its class, comparable to old adventuring models like Tiefighter, and prose with minimal "slop" (not bad for no finetuning), frequently devising electrifying metaphors that left us consistently astonished.

Use ChatML formatting. Early testing versions had a tendency to leak tokens, but this should be more or less hammered out.

I don't want to toot my own bugle though; I'm really proud of how this came out, but please leave your feedback, good or bad.

Special thanks as usual to Toaster for his feedback and Fizz for helping fund compute, as well as the KoboldAI Discord for their resources.

Merge Method

This model was merged using the DARE TIES merge method, with IntervitensInc/Mistral-Nemo-Base-2407-chatml as the base.

Models Merged

The following models were included in the merge:

  • IntervitensInc/Mistral-Nemo-Base-2407-chatml
  • nbeerbower/mistral-nemo-bophades-12B
  • nbeerbower/mistral-nemo-wissenschaft-12B
  • elinas/Chronos-Gold-12B-1.0
  • Fizzarolli/MN-12b-Sunrose
  • nbeerbower/mistral-nemo-gutenberg-12B-v4
  • anthracite-org/magnum-12b-v2.5-kto

Configuration

The following YAML configurations were used to produce this model:

Monk:

models:
  - model: nbeerbower/mistral-nemo-bophades-12B
  - model: nbeerbower/mistral-nemo-wissenschaft-12B
merge_method: slerp
base_model: nbeerbower/mistral-nemo-bophades-12B
parameters:
  t: [0.1, 0.2, 0.4, 0.6, 0.6, 0.4, 0.2, 0.1]
dtype: bfloat16
tokenizer_source: base
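A note on the t parameter in these SLERP stages: mergekit applies it as a per-layer-group gradient, where t = 0 keeps the base model's weights and t = 1 takes the second model's, so the curves here blend in the secondary model most strongly in the middle layers. For reference, spherical linear interpolation between weight vectors a and b follows the standard formula:

$$
\operatorname{slerp}(a, b; t) = \frac{\sin\!\big((1-t)\,\Omega\big)}{\sin \Omega}\, a + \frac{\sin(t\,\Omega)}{\sin \Omega}\, b,
\qquad \cos \Omega = \frac{a \cdot b}{\lVert a \rVert\, \lVert b \rVert}
$$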

Hero:

models:
  - model: elinas/Chronos-Gold-12B-1.0
  - model: Fizzarolli/MN-12b-Sunrose
merge_method: slerp
base_model: elinas/Chronos-Gold-12B-1.0
parameters:
  t: [0.1, 0.2, 0.4, 0.6, 0.6, 0.4, 0.2, 0.1]
dtype: bfloat16
tokenizer_source: base

Deity:

models:
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v4
  - model: anthracite-org/magnum-12b-v2.5-kto
merge_method: slerp
base_model: nbeerbower/mistral-nemo-gutenberg-12B-v4
parameters:
  t: [0, 0.1, 0.2, 0.25, 0.25, 0.2, 0.1, 0]
dtype: bfloat16
tokenizer_source: base

Mag Mell:

models:
  - model: monk
    parameters:
      density: 0.7
      weight: 0.5
  - model: hero
    parameters:
      density: 0.9
      weight: 1
  - model: deity
    parameters:
      density: 0.5
      weight: 0.7
merge_method: dare_ties
base_model: IntervitensInc/Mistral-Nemo-Base-2407-chatml
tokenizer_source: base
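To reproduce the pipeline, each intermediate has to be merged before the final DARE-TIES pass, since the Mag Mell config references them by local path. Here's a minimal sketch using mergekit's mergekit-yaml CLI; the YAML file names are my own, and only the configs above are from the actual run:

```python
# Sketch: running the four mergekit stages in dependency order.
# Assumes each YAML config above is saved under the name shown.
import subprocess

stages = [
    ("monk.yml", "./monk"),    # SLERP: bophades + wissenschaft
    ("hero.yml", "./hero"),    # SLERP: Chronos-Gold + Sunrose
    ("deity.yml", "./deity"),  # SLERP: gutenberg + magnum
    ("mag-mell.yml", "./MN-12B-Mag-Mell-R1"),  # DARE-TIES of the three
]

for config, out_dir in stages:
    # mergekit-yaml <config> <output-dir> builds one merge per call;
    # the final stage can only run once ./monk, ./hero, ./deity exist.
    subprocess.run(["mergekit-yaml", config, out_dir], check=True)
```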

In Irish mythology, Mag Mell (modern spelling: Magh Meall, meaning 'delightful plain') is one of the names for the Celtic Otherworld, a mythical realm achievable through death and/or glory... Never explicitly stated in any surviving mythological account to be an afterlife; rather, it is usually portrayed as a paradise populated by deities, which is occasionally visited by some adventurous mortals. In its island guise, it was visited by various legendary Irish heroes and monks, forming the basis of the adventure myth or echtrae...
