---
license: cc-by-4.0
tags:
  - requests
  - gguf
  - quantized
---


Welcome to my GGUF-IQ-Imatrix Model Quantization Requests card!

Please read everything.

This card is only for requesting GGUF-IQ-Imatrix quants of models that meet the requirements below.

Requirements to request GGUF-Imatrix model quantizations:

For the model:

  • Maximum model parameter size of 11B.
    At the moment I am unable to accept requests for larger models due to hardware/time limitations. Mistral-based models in the creative/roleplay niche are preferred.

Important:

  • Fill out the request template as outlined in the next section.

How to request a model quantization:

  1. Open a New Discussion titled "Request: Model-Author/Model-Name", for example, "Request: Nitral-AI/Infinitely-Laydiculous-7B", without the quotation marks.

  2. Include the following template in your post and fill the required information (example request here):

**[Required] Model name:**
Ans:

**[Required] Model link:**
Ans:

**[Required] Brief description:**
Ans:

**[Required] An image/direct image link to represent the model (square shaped):**
Ans:

**[Optional] Additional quants (if you want any):**
Ans:

Default list of quants for reference:

        "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
        "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
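
For context, quants like these are typically produced with llama.cpp's `llama-quantize` tool. The sketch below is only an illustration of that loop, not this card's actual workflow; the file names, paths, and the commented invocation are assumptions.

```shell
# The default quant list from above (assumption: produced via llama.cpp).
QUANTS="Q4_K_M Q4_K_S IQ4_XS Q5_K_M Q5_K_S Q6_K Q8_0 IQ3_M IQ3_S IQ3_XXS"

for q in $QUANTS; do
  out="model-${q}-imat.gguf"   # hypothetical output file name
  echo "target: $out"
  # Actual invocation would require a local llama.cpp build, an FP16 GGUF,
  # and a precomputed importance matrix (imatrix.dat), e.g.:
  # ./llama-quantize --imatrix imatrix.dat model-f16.gguf "$out" "$q"
done
```

The `--imatrix` file is what makes these "Imatrix" quants: it weights the quantization error by activation importance, which matters most for the smaller IQ3 variants.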