Lewdiculous committed
Commit 9c36952
1 Parent(s): d770f62

Create README.md

Files changed (1): README.md (+68, -0)
---
base_model:
- ChaoticNeutrals/BuRP_7B
- Endevor/InfinityRP-v1-7B
library_name: transformers
tags:
- mergekit
- merge
license: other
inference: false
language:
- en
---

This repository hosts GGUF-IQ-Imatrix quants for [ChaoticNeutrals/Eris-Lelanacles-7b](https://huggingface.co/ChaoticNeutrals/Eris-Lelanacles-7b).

Thanks @jeiku for merging this!

This is an experimental model. Feedback is appreciated as always.

**Steps:**

```
Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)
```
*Quantized using the latest llama.cpp at the time.*

```python
quantization_options = [
    "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
    "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
]
```
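The steps above amount to one llama.cpp quantize invocation per listed quant type. A minimal sketch of that loop, assuming a llama.cpp checkout whose `quantize` tool accepts an `--imatrix` file; the binary name and file paths are illustrative, not taken from this repo:

```python
# Quant types from the list above.
quantization_options = [
    "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
    "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS",
]

def build_quantize_commands(f16_gguf, imatrix, out_prefix, options):
    """Build one hypothetical llama.cpp quantize command per target type."""
    return [
        ["./quantize", "--imatrix", imatrix,
         f16_gguf, f"{out_prefix}-{opt}.gguf", opt]
        for opt in options
    ]

commands = build_quantize_commands(
    "model-F16.gguf", "imatrix.dat", "model", quantization_options
)
for cmd in commands:
    print(" ".join(cmd))  # dry run; execute with subprocess.run(cmd, check=True)
```

Printing the commands first (instead of running them) makes it easy to check output filenames before committing to ten quantization passes.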
---

# BuRPInfinity_9B

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/l8WTAhHEV-SKQ8_BZs_ZH.jpeg)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the passthrough merge method.

### Models Merged

The following models were included in the merge:
* [ChaoticNeutrals/BuRP_7B](https://huggingface.co/ChaoticNeutrals/BuRP_7B)
* [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: Endevor/InfinityRP-v1-7B
        layer_range: [0, 20]
  - sources:
      - model: ChaoticNeutrals/BuRP_7B
        layer_range: [12, 32]
merge_method: passthrough
dtype: float16
```
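Passthrough concatenates the selected layer slices rather than averaging weights, so the result is deeper than either 32-layer 7B source, which is where the 9B size comes from. A quick sanity check of the slice arithmetic, assuming mergekit's half-open `layer_range: [start, end)` convention (the helper below is illustrative, not part of mergekit):

```python
# Slice ranges copied from the YAML config above.
slices = [
    ("Endevor/InfinityRP-v1-7B", (0, 20)),
    ("ChaoticNeutrals/BuRP_7B", (12, 32)),
]

# Each slice contributes (end - start) layers to the stacked model.
total_layers = sum(end - start for _, (start, end) in slices)
print(total_layers)  # 20 + 20 = 40 layers, up from 32 per source model
```

Note that layers 12-19 appear in both slices, so those blocks are intentionally duplicated; 40 layers versus 32 scales two ~7B models up to roughly 9B parameters.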