---
license: bigcode-openrail-m
pipeline_tag: text-generation
library_name: gguf
---
**NOTE**: This model is not yet supported by mainline llama.cpp; these quants are intended for testing [PR#5795](https://github.com/ggerganov/llama.cpp/pull/5795).

GGUF quants for https://huggingface.co/bigcode/starcoder2-15b  

> StarCoder2-15B model is a 15B parameter model trained on 600+ programming languages from The Stack v2, with opt-out requests excluded. The model uses Grouped Query Attention, a context window of 16,384 tokens with a sliding window attention of 4,096 tokens, and was trained using the Fill-in-the-Middle objective on 4+ trillion tokens.

> The model was trained on GitHub code as well as additional selected data sources such as Arxiv and Wikipedia. As such it is not an instruction model and commands like "Write a function that computes the square root." do not work well.
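Since the model was trained with the Fill-in-the-Middle objective, code-infilling prompts tend to work better than instruction-style requests. A minimal sketch of building a FIM prompt, assuming StarCoder2 uses the standard StarCoder FIM special tokens (`<fim_prefix>`, `<fim_suffix>`, `<fim_middle>`); verify the exact token names against the model's tokenizer config:

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Arrange code before/after the gap so the model generates the middle."""
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# Example: ask the model to fill in the function body.
prompt = build_fim_prompt(
    prefix="def square_root(x):\n    ",
    suffix="\n    return result\n",
)
print(prompt)
```

The model's completion (the "middle") is then spliced between the prefix and suffix in your editor or tooling.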

| Layers | Context | Template (None/Base Model) |
| --- | --- | --- |
| <pre>40</pre> | <pre>16384</pre> | <pre>{prompt}</pre> |
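Because the template is just `{prompt}` (a base model, not an instruct model), you prompt it with raw code to continue. A hedged usage sketch, assuming a `main` binary built from the PR#5795 branch; the quant filename shown is a placeholder for whichever GGUF file you downloaded:

```shell
# Run a completion with llama.cpp built from the PR branch.
# -c sets the context length (the model supports up to 16384 tokens).
./main -m starcoder2-15b.Q4_K_M.gguf \
       -c 16384 \
       -p "def fibonacci(n):"
```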