PsyMedLewd. A merge of two of my favourite models for sci-fi stories:

- [Undi95/MXLewd-L2-20B](https://huggingface.co/Undi95/MXLewd-L2-20B)
- [Undi95/PsyMedRP-v1-20B](https://huggingface.co/Undi95/PsyMedRP-v1-20B)

![Warning: Cute alien girls inside!](https://huggingface.co/Elfrino/PsyMedLewd_v4-Q5_K_M-GGUF/resolve/main/wantedGirl2.jpg)

Fourth iteration. Seems quirkier and more creative, but also perhaps a tad unstable and tipsy. Handle with care!

RECOMMENDED SETTINGS FOR ALL PsyMedLewd VERSIONS (based on KoboldCPP):

- Temperature: 1.3
- Max Ctx. Tokens: 4096
- Top p Sampling: 0.99
- Repetition Penalty: 1.09
- Amount to Gen.: 512

Prompt template: Alpaca or ChatML

##################################################################################################

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:

* [Undi95/MXLewd-L2-20B](https://huggingface.co/Undi95/MXLewd-L2-20B)
* [Undi95/PsyMedRP-v1-20B](https://huggingface.co/Undi95/PsyMedRP-v1-20B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: Undi95/PsyMedRP-v1-20B
        layer_range: [0, 62]  # PsyMedRP has 62 layers
      - model: Undi95/MXLewd-L2-20B
        layer_range: [0, 62]  # MXLewd has 62 layers
merge_method: slerp  # changing to the SLERP method
base_model: Undi95/PsyMedRP-v1-20B  # focus on reasoning from PsyMedRP
parameters:
  t:
    - filter: self_attn
      value: [0.3, 0.6, 0.9, 0.6, 0.3]  # smooth gradient of focus
    - filter: mlp
      value: [0.3, 0.6, 0.9, 0.6, 0.3]  # consistent level of creativity and abstract reasoning
    - value: 0.639  # default t for all remaining tensors
dtype: bfloat16  # use preferred dtype
```
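For anyone curious what the `t` values above actually do: SLERP interpolates each pair of tensors along the arc between the two models, with `t = 0` keeping the base model (PsyMedRP) and `t = 1` taking MXLewd. Below is a minimal NumPy sketch of that interpolation; it is an illustration of the idea only, not mergekit's exact implementation, and the tensors are made-up placeholders.

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a_dir = a.ravel() / (np.linalg.norm(a) + eps)   # unit direction of tensor a
    b_dir = b.ravel() / (np.linalg.norm(b) + eps)   # unit direction of tensor b
    theta = np.arccos(np.clip(np.dot(a_dir, b_dir), -1.0, 1.0))  # angle between them
    if theta < eps:                                  # nearly parallel: plain lerp is fine
        return (1.0 - t) * a + t * b
    # Weight each original tensor so the result moves along the arc from a toward b.
    return (np.sin((1.0 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)

# 0.639 is the config's default blend for tensors not matched by a filter.
blended = slerp(0.639, np.random.randn(4, 4), np.random.randn(4, 4))
print(blended.shape)  # (4, 4)
```

To reproduce the merge itself, pointing mergekit's `mergekit-yaml` command at a file containing the configuration above (e.g. `mergekit-yaml config.yaml ./PsyMedLewd`) should rebuild it, assuming both source models are available locally or on the Hub.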
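If you are running one of the GGUF quants outside KoboldCPP, here is a small llama-cpp-python sketch that applies the recommended settings listed above; the model path and the prompt text are placeholders, so substitute whichever GGUF file and story prompt you actually use.

```python
from llama_cpp import Llama

# Placeholder path: point this at whichever PsyMedLewd GGUF file you downloaded.
llm = Llama(model_path="./psymedlewd_v4.Q5_K_M.gguf", n_ctx=4096)  # Max Ctx. Tokens

# Alpaca-style prompt, per the recommended prompt templates above.
prompt = (
    "### Instruction:\n"
    "Write the opening scene of a pulpy sci-fi story set on a derelict mining station.\n\n"
    "### Response:\n"
)

out = llm(
    prompt,
    max_tokens=512,       # Amount to Gen.
    temperature=1.3,      # Temperature
    top_p=0.99,           # Top p Sampling
    repeat_penalty=1.09,  # Repetition Penalty
)
print(out["choices"][0]["text"])
```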