LnL-AI/dbrx-base-converted-v2-4bit-gptq-gptq

Tags: Text Generation · Transformers · dbrx · custom_code · text-generation-inference · Inference Endpoints · 4-bit precision · gptq
Papers: arxiv:2211.15841 · arxiv:2304.11277
Branch: main · 1 contributor · History: 14 commits
Latest commit: Update README.md by Qubitium (425d135, verified, 6 months ago)
File                                      Size       Last commit                                                               Age
.gitattributes                            1.67 kB    Upload gptq_model-4bit-128g.safetensors.part1 with huggingface_hub        6 months ago
README.md                                 12.7 kB    Update README.md                                                          6 months ago
combine_tensors.sh                        92 Bytes   Create combine_tensors.sh                                                 6 months ago
config.json                               1.24 kB    Upload config.json with huggingface_hub                                   6 months ago
configuration_dbrx.py                     10.8 kB    Upload configuration_dbrx.py with huggingface_hub                         6 months ago
gptq_model-4bit-128g.safetensors.part1    37.6 GB    (LFS) Upload gptq_model-4bit-128g.safetensors.part1 with huggingface_hub  6 months ago
gptq_model-4bit-128g.safetensors.part2    32.7 GB    (LFS) Upload gptq_model-4bit-128g.safetensors.part2 with huggingface_hub  6 months ago
modeling_dbrx.py                          62.9 kB    Create modeling_dbrx.py                                                   6 months ago
quantize_config.json                      269 Bytes  Upload quantize_config.json with huggingface_hub                          6 months ago
special_tokens_map.json                   587 Bytes  Upload special_tokens_map.json with huggingface_hub                       6 months ago
tiktoken.py                               17 kB      Upload tiktoken.py with huggingface_hub                                   6 months ago
tokenizer_config.json                     753 Bytes  Upload tokenizer_config.json with huggingface_hub                         6 months ago
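The quantized weights are split into `.part1` and `.part2` files, presumably to stay under per-file upload limits, and likely need to be concatenated back into a single safetensors file before loading. The following is a sketch of what `combine_tensors.sh` plausibly does; this is an assumption based on the file names (the actual 92-byte script is not shown here), demonstrated on tiny stand-in files rather than the real ~37.6 GB and ~32.7 GB parts:

```shell
# Stand-in part files for demonstration only; in the real repository these
# are the two large safetensors shards downloaded from the model page.
printf 'AAAA' > gptq_model-4bit-128g.safetensors.part1
printf 'BBBB' > gptq_model-4bit-128g.safetensors.part2

# Concatenate the parts, in order, into the single combined file.
cat gptq_model-4bit-128g.safetensors.part1 \
    gptq_model-4bit-128g.safetensors.part2 \
    > gptq_model-4bit-128g.safetensors

# The combined size should equal the sum of the part sizes.
wc -c < gptq_model-4bit-128g.safetensors
```

After combining, the single `gptq_model-4bit-128g.safetensors` file is what a GPTQ-aware loader would expect to find alongside `config.json` and `quantize_config.json`.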