
Miquella 120B

The model has been remade with the fixed dequantization of miqu.

This is a merge of pre-trained language models created with mergekit: an attempt at re-creating goliath-120b using the new miqu-1-70b model in place of Xwin.

The merge ratios are the same as in goliath; the only change is that Xwin is swapped out for miqu.
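A goliath-style recipe of this kind can be sketched as a mergekit passthrough config. This is a minimal illustrative sketch, not the actual config used here: the layer ranges below are placeholders rather than the real goliath ratios, and the assumption that Euryale remains the second component (as in goliath's recipe) is mine.

```yaml
# Hypothetical sketch of a goliath-style frankenmerge with miqu
# swapped in for Xwin. Layer ranges are illustrative placeholders.
slices:
  - sources:
      - model: miqu-1-70b          # replaces Xwin in the goliath recipe
        layer_range: [0, 16]
  - sources:
      - model: Euryale-1.3-L2-70B  # assumed second component, as in goliath
        layer_range: [8, 24]
  - sources:
      - model: miqu-1-70b
        layer_range: [17, 32]
merge_method: passthrough          # interleaves layer slices, no weight averaging
dtype: float16
```

In a passthrough merge, overlapping layer ranges are stacked rather than averaged, which is why the resulting model (roughly 118B params here) is larger than either 70B source.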

Models Merged

The following models were included in the merge:

(Image: Miquella the Unalloyed, by @eldrtchmoon)

Safetensors · Model size: 118B params · Tensor type: FP16