---
base_model:
- tavtav/Rose-20B
- DavidAU/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32
library_name: transformers
tags:
- mergekit
- merge
- llama
---

# Psyonic-Rose 20B FP32

### Speculative recreation of jebcarter's Psyonic-Rose-20B (Llama2)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/304PSqR4WSUQlENjBSc10.png)

# Thanks mradermacher for the quants!

* [GGUF](https://huggingface.co/mradermacher/Psyonic-Rose-20B-Higher-Quality-GGUF)
* [GGUF imatrix](https://huggingface.co/mradermacher/Psyonic-Rose-20B-Higher-Quality-i1-GGUF)

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
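A linear merge simply takes a weighted average of the corresponding parameter tensors of the input models. The sketch below illustrates the arithmetic in plain Python, with small lists standing in for tensors; `linear_merge` and its normalization behavior are illustrative, not mergekit's actual API:

```python
def linear_merge(state_dicts, weights, normalize=True):
    """Combine per-parameter values as a weighted sum across models."""
    if normalize:
        # Rescale weights to sum to 1, so the result stays at model scale.
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for name in state_dicts[0]:
        merged[name] = [
            sum(w * sd[name][i] for sd, w in zip(state_dicts, weights))
            for i in range(len(state_dicts[0][name]))
        ]
    return merged

# Two toy "models" with one four-element parameter each.
a = {"layer.weight": [1.0, 2.0, 3.0, 4.0]}
b = {"layer.weight": [5.0, 6.0, 7.0, 8.0]}

# Weights mirroring this card's config: 1.0 for the base, 0.05 for Rose.
merged = linear_merge([a, b], [1.0, 0.05])
```

With a 1.0/0.05 split, the result stays close to the dominant model while picking up a light flavor of the second, which is the intent of this merge.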

### Models Merged

The following models were included in the merge:

* [tavtav/Rose-20B](https://huggingface.co/tavtav/Rose-20B)
* [DavidAU/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32](https://huggingface.co/DavidAU/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: DavidAU/Psyonic-Cetacean-V1-20B-Ultra-Quality-Float32
    parameters:
      weight: 1.0
  - model: tavtav/Rose-20B # fp16
    parameters:
      weight: 0.05
merge_method: linear
dtype: float32
```

* credits jebcarter
* credits DavidAU
* credits tavtav
* credits NeverSleep
* credits CalderaAI

Uses the Alpaca instruct format.
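Since the card specifies the Alpaca instruct format, here is a small helper that builds such a prompt. The wording follows the standard Alpaca template; `alpaca_prompt` is a convenience function for illustration, not part of any library:

```python
def alpaca_prompt(instruction, input_text=None):
    """Format a request in the standard Alpaca instruct template."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = alpaca_prompt("Summarize the plot of Moby-Dick in one sentence.")
```

The model's completion is then generated after the trailing `### Response:` marker.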