
Llama-3.2-Instruct-3B-TIES

Overview

Llama-3.2-Instruct-3B-TIES is the result of merging three Llama-3.2-3B model variants with the TIES merge method, using mergekit. The merge combines a general-purpose base model with two instruction-tuned models, aiming to produce a single, more versatile model that handles a broader range of tasks.
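As a rough illustration of the TIES procedure (a toy sketch, not mergekit's actual implementation), the code below applies the three TIES steps to a single parameter tensor: trim low-magnitude entries of each task vector, elect a per-parameter sign, and average only the deltas that agree with that sign. All names are illustrative, and weight normalization is omitted for brevity.

import torch

def ties_merge(base, finetuned, weights, density=0.5):
    """Toy TIES merge of several fine-tuned tensors into a base tensor.

    base:      parameter tensor of the base model
    finetuned: list of parameter tensors from the fine-tuned models
    weights:   per-model merge weights (e.g. 0.5 and 0.3 in the config below)
    density:   fraction of each task vector kept after trimming
    """
    trimmed = []
    for ft in finetuned:
        delta = ft - base                                # task vector
        k = max(1, int(density * delta.numel()))
        # Trim: keep only the top-k entries by magnitude, zero out the rest.
        threshold = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
        trimmed.append(torch.where(delta.abs() >= threshold, delta, torch.zeros_like(delta)))

    w = torch.tensor(weights).view(-1, *([1] * base.dim()))
    stacked = torch.stack(trimmed) * w                   # weighted, trimmed task vectors
    # Elect sign: majority sign per parameter, weighted by magnitude.
    sign = torch.sign(stacked.sum(dim=0))
    sign = torch.where(sign == 0, torch.ones_like(sign), sign)
    # Disjoint merge: average only the entries whose sign matches the elected sign.
    agree = (torch.sign(stacked) == sign)
    merged_delta = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged_delta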

Model Details

Model Description

This model was produced by a TIES merge of meta-llama/Llama-3.2-3B (the base model), meta-llama/Llama-3.2-3B-Instruct, and unsloth/Llama-3.2-3B-Instruct, with the merged weights saved in float16.

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: meta-llama/Llama-3.2-3B
    # Base model
  - model: meta-llama/Llama-3.2-3B-Instruct
    parameters:
      density: 0.5
      weight: 0.5
  - model: unsloth/Llama-3.2-3B-Instruct
    parameters:
      density: 0.5
      weight: 0.3
merge_method: ties
base_model: meta-llama/Llama-3.2-3B
parameters:
  normalize: true
dtype: float16
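
The merge itself is produced by mergekit (typically via its mergekit-yaml command-line entry point) from the configuration above. Below is a minimal inference sketch with transformers; it assumes the merged weights are available on the Hugging Face Hub under vhab10/Llama-3.2-Instruct-3B-TIES.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vhab10/Llama-3.2-Instruct-3B-TIES"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # the merge was produced in float16
    device_map="auto",
)

prompt = "Summarize what TIES model merging does in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))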