
# Llama-3-Teal-Instruct-2x8B-MoE

This is an experimental MoE (Mixture of Experts) model created from meta-llama/Meta-Llama-3-8B-Instruct and nvidia/Llama3-ChatQA-1.5-8B using Mergekit.

Green + Blue = Teal.

Mergekit YAML config:

```yaml
base_model: Meta-Llama-3-8B-Instruct
experts:
  - source_model: Meta-Llama-3-8B-Instruct
    positive_prompts:
    - "explain"
    - "chat"
    - "assistant"
  - source_model: Llama3-ChatQA-1.5-8B
    positive_prompts:
    - "python"
    - "math"
    - "solve"
    - "code"
gate_mode: hidden
dtype: float16
```
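The merged model can be loaded like any other causal LM. The snippet below is a minimal usage sketch, not part of the original card; it assumes the repository id `RDson/Llama-3-Teal-Instruct-2x8B-MoE` and the standard `transformers` chat-template API.

```python
# Minimal usage sketch (assumption: repo id and standard transformers API).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RDson/Llama-3-Teal-Instruct-2x8B-MoE"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the card lists FP16 tensors
    device_map="auto",
)

# Llama 3 Instruct models ship a chat template; apply_chat_template builds the prompt.
messages = [
    {"role": "user", "content": "Explain what a Mixture-of-Experts model is."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```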
Model size: 13.7B params (Safetensors, FP16).
