---
license: apache-2.0
base_model:
- localfultonextractor/Erosumika-7B
- Nitral-AI/Infinitely-Laydiculous-7B
- Nitral-AI/Kunocchini-7b-128k-test
- Endevor/EndlessRP-v3-7B
- ChaoticNeutrals/BuRP_7B
- crestf411/daybreak-kunoichi-2dpo-7b
library_name: transformers
tags:
- mergekit
- merge
---

My first merge of 7B RP models using mergekit. They are just currently trending RP models from Reddit, and half of the final blend is BuRP_7B. I haven't used any of them myself yet, so this is a **dumb** merge, but hopefully a lucky one! ^^'  
<div style="width: auto; margin-left: auto; margin-right: auto">
    <img src="https://i.imgur.com/d38LuOG.png" alt="Nekochu" style="width: 250%; min-width: 400px; display: block; margin: auto;">
</div>

The name stands for *Confluence*, since many unique RP models flow together here, and *Renegade*, since most of them come with no guardrails.

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: ./modela/Erosumika-7B
    parameters:
      density: [1, 0.8, 0.6]
      weight: 0.2
  - model: ./modela/Infinitely-Laydiculous-7B
    parameters:
      density: [0.9, 0.7, 0.5]
      weight: 0.2
  - model: ./modela/Kunocchini-7b-128k-test
    parameters:
      density: [0.8, 0.6, 0.4]
      weight: 0.2
  - model: ./modela/EndlessRP-v3-7B
    parameters:
      density: [0.7, 0.5, 0.3]
      weight: 0.2
  - model: ./modela/daybreak-kunoichi-2dpo-7b
    parameters:
      density: [0.5, 0.3, 0.1]
      weight: 0.2
merge_method: dare_linear
base_model: ./modela/Mistral-7B-v0.1
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
name: intermediate-model
---
slices:
  - sources:
      - model: intermediate-model
        layer_range: [0, 32]
      - model: ./modela/BuRP_7B
        layer_range: [0, 32]
merge_method: slerp
base_model: intermediate-model
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
name: gradient-slerp
```

```sh
mergekit-mega config.yml ./output-model-directory --cuda --allow-crimes --lazy-unpickle
```
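
For intuition, the first stage's `dare_linear` method roughly works by sparsifying each model's delta from the base model down to the configured `density`, rescaling the surviving entries, and then summing the deltas with the (normalized) `weight` values. The snippet below is only a toy NumPy illustration of that idea, not mergekit's actual implementation; the layer-wise density gradients and the second slerp stage are left out.

```python
# Illustrative sketch of the dare_linear idea (NOT mergekit's code):
# drop delta entries to a target density, rescale survivors, then take a weighted sum.
import numpy as np

rng = np.random.default_rng(0)

def dare_linear(base, finetuned, densities, weights, normalize=True):
    # base: 1-D array of base weights; finetuned: list of 1-D arrays with the same shape.
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]
    merged_delta = np.zeros_like(base)
    for ft, density, weight in zip(finetuned, densities, weights):
        delta = ft - base
        keep = rng.random(delta.shape) < density       # drop ~(1 - density) of the delta entries
        delta = np.where(keep, delta / density, 0.0)   # rescale survivors so the expected delta is unchanged
        merged_delta += weight * delta
    return base + merged_delta

# Toy example: two "fine-tunes" of an 8-parameter base, using the first density of each gradient above.
base = rng.normal(size=8)
tunes = [base + rng.normal(scale=0.1, size=8) for _ in range(2)]
print(dare_linear(base, tunes, densities=[1.0, 0.9], weights=[0.2, 0.2]))
```

With `normalize: true`, the five 0.2 weights sum to 1, so the combined delta is roughly a convex combination of the individual models' deltas before being added back onto the Mistral-7B base.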

### Models Merged

The following models were included in the merge:
- [localfultonextractor/Erosumika-7B](https://huggingface.co/localfultonextractor/Erosumika-7B)
- [Nitral-AI/Infinitely-Laydiculous-7B](https://huggingface.co/Nitral-AI/Infinitely-Laydiculous-7B)
- [Kunocchini-7b-128k-test](https://huggingface.co/Nitral-AI/Kunocchini-7b-128k-test)
- [Endevor/EndlessRP-v3-7B](https://huggingface.co/Endevor/EndlessRP-v3-7B)
- [ChaoticNeutrals/BuRP_7B](https://huggingface.co/ChaoticNeutrals/BuRP_7B)
- [daybreak-kunoichi-2dpo-7b](https://huggingface.co/crestf411/daybreak-kunoichi-2dpo-7b)
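
Once the merge is written out, the result loads like any Mistral-style causal LM. Below is a minimal usage sketch, assuming you load straight from the local `./output-model-directory` (or the repo id you upload it under); the prompt is just a placeholder.

```python
# Minimal usage sketch: load the merged model from the mergekit output directory.
# The path is an assumption; replace it with your own path or Hugging Face repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./output-model-directory"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires the accelerate package
)

prompt = "You are a creative roleplay partner.\n\nUser: Describe the tavern we just walked into.\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```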