---
license: cc-by-4.0
tags:
- requests
- gguf
- quantized
---

![requests-banner.png](https://huggingface.co/Lewdiculous/Model-Requests/resolve/main/requests-banner.png)

# Welcome to my GGUF-IQ-Imatrix Model Quantization Requests card!

Please read everything.

This card is meant only for requesting GGUF-IQ-Imatrix quants of models that meet the requirements below.

**Requirements to request GGUF-Imatrix model quantizations:**

For the model:
- Maximum model parameter size of **11B**. <br>
*At the moment I am unable to accept requests for larger models due to hardware/time limitations.*
*Preferably Mistral-based models in the creative/roleplay niche.*

Important:
- Fill in the request template as outlined in the next section.

#### How to request a model quantization:

1. Open a [**New Discussion**](https://huggingface.co/Lewdiculous/Model-Requests/discussions/new) titled "`Request: Model-Author/Model-Name`", for example, "`Request: Nitral-AI/Infinitely-Laydiculous-7B`", without the quotation marks.

2. Include the following template in your post and fill in the required information ([example request here](https://huggingface.co/Lewdiculous/Model-Requests/discussions/1)):

```
**[Required] Model name:**
Ans:

**[Required] Model link:**
Ans:

**[Required] Brief description:**
Ans:

**[Required] An image/direct image link to represent the model (square shaped):**
Ans:

**[Optional] Additional quants (if you want any):**
Ans:

Default list of quants for reference:

        "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
        "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"

```