---
license: apache-2.0
datasets:
  - anthracite-org/kalo-opus-instruct-22k-no-refusal
  - Nopm/Opus_WritingStruct
  - Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
  - Gryphe/Sonnet3.5-Charcard-Roleplay
  - Gryphe/ChatGPT-4o-Writing-Prompts
  - Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
  - Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
  - nothingiisreal/Reddit-Dirty-And-WritingPrompts
  - allura-org/Celeste-1.x-data-mixture
  - allura-org/shortstories_synthlabels
base_model:
  - Qwen/Qwen2.5-14B
---

I have no idea what I’m doing… if this causes the apocalypse, someone please let me know.

# EVA-Qwen2.5-14B-v0.0 8.0bpw h8 EXL2

Includes a [measurement.json](https://huggingface.co/FuturisticVibes/EVA-Qwen2.5-14B-v0.0-8.0bpw-h8-exl2/tree/measurement) file for further quantization.

Salesforce/xLAM-8x22b-r is on hold for now, probably early next year; need to save some money…

Original Model: https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-14B-v0.0

# Original Model Card

**EVA Qwen2.5 14B**

An RP/storywriting specialist model, a full-parameter finetune of Qwen2.5-14B on a mixture of synthetic and natural data.
It uses the Celeste 70B 0.1 data mixture, greatly expanding it to improve the versatility, creativity, and "flavor" of the resulting model.

Prompt format is ChatML.
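ChatML wraps each turn in `<|im_start|>` / `<|im_end|>` markers. A minimal sketch of assembling such a prompt by hand (the `to_chatml` helper is illustrative, not part of the model's tooling; in practice the tokenizer's chat template does this for you):

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    prompt = "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )
    if add_generation_prompt:
        # Leave an open assistant turn for the model to complete.
        prompt += "<|im_start|>assistant\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a creative writing assistant."},
    {"role": "user", "content": "Write an opening line for a mystery story."},
]
print(to_chatml(messages))
```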


Recommended sampler values:

The model appears to prefer lower temperatures (0.8 and below) and absolutely hates the Min-P sampler.
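As a rough starting point, the note above might translate into generation settings like these (only `temperature` and `min_p` follow from the card; the other values are common defaults, not recommendations from the authors):

```python
# Hypothetical sampler settings derived from the card's guidance.
sampler_settings = {
    "temperature": 0.8,          # card: the model prefers 0.8 and lower
    "min_p": 0.0,                # card: the model hates Min-P, so disable it
    "top_p": 0.9,                # assumption: a typical nucleus-sampling default
    "repetition_penalty": 1.05,  # assumption: a mild anti-repetition default
}
print(sampler_settings)
```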

Recommended SillyTavern presets (via CalamitousFelicitousness):

- [Context](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Context.json)
- [Instruct and System Prompt](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Instruct.json)


Training data:

Hardware used:


The model was trained by Kearm and Auri.

Special thanks: