
Model Card for dragon-mistral-7b-v0 GGUF

These are 4-bit and 5-bit quantized versions of dragon-mistral-7b-v0 by llmware. The quantized models are in GGUF format and were produced with llama.cpp.
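
Quantizations like these are typically produced with llama.cpp's conversion and quantization tools. The sketch below is not the exact procedure used for these files; it is a minimal reproduction outline driven from Python, and the script names, quantization types (Q4_K_M, Q5_K_M), and output filenames are assumptions that may not match this repository.

```python
# Minimal sketch of producing 4-bit / 5-bit GGUF quantizations with llama.cpp.
# Paths, filenames, and quant types are assumptions; adjust to your local setup.
import subprocess

HF_MODEL_DIR = "dragon-mistral-7b-v0"         # checkout of llmware/dragon-mistral-7b-v0
FP16_GGUF = "dragon-mistral-7b-v0.fp16.gguf"  # hypothetical intermediate file name

# 1. Convert the Hugging Face checkpoint to a full-precision GGUF file.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", HF_MODEL_DIR, "--outfile", FP16_GGUF],
    check=True,
)

# 2. Quantize the GGUF file to 4-bit and 5-bit variants.
for quant in ("Q4_K_M", "Q5_K_M"):
    subprocess.run(
        ["./llama-quantize", FP16_GGUF, f"dragon-mistral-7b-v0.{quant}.gguf", quant],
        check=True,
    )
```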

Model Description

For more information about the prompt template, refer to the original model repository.

Original Model Details:

Creator: llmware

Link: https://huggingface.co/llmware/dragon-mistral-7b-v0

Quantized Model Details:

Format: GGUF

Model size: 7.24B params

Architecture: llama

Available quantizations: 4-bit, 5-bit

Inference Examples
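
These GGUF files can be run locally with any llama.cpp-compatible runtime. Below is a minimal sketch using the llama-cpp-python bindings; the GGUF filename is an assumption (check this repository's file list for the actual name), and the bare prompt is only a placeholder, since the original llmware model card defines the prompt template that dragon models expect.

```python
# Minimal local-inference sketch using llama-cpp-python (pip install llama-cpp-python).
# The GGUF filename below is an assumption; use the actual file name from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="dragon-mistral-7b-v0.Q4_K_M.gguf",  # hypothetical 4-bit file name
    n_ctx=2048,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; set 0 for CPU-only
)

# Placeholder prompt: see the original llmware model card for the exact prompt template.
prompt = "What is GGUF quantization?"

output = llm(prompt, max_tokens=256, temperature=0.0)
print(output["choices"][0]["text"])
```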