# Minerva-MoE-3x3B

Minerva-MoE-3x3B is a Mixture of Experts (MoE) model built from the following models using LazyMergekit:

- sapienzanlp/Minerva-3B-base-v1.0
- DeepMount00/Minerva-3B-base-RAG
- FairMind/Minerva-3B-Instruct-v1.0
## 🧩 Configuration
```yaml
base_model: sapienzanlp/Minerva-3B-base-v1.0
experts:
  - source_model: sapienzanlp/Minerva-3B-base-v1.0
    positive_prompts:
      - "ciao"
      - "chat"
      - "parlare"
  - source_model: DeepMount00/Minerva-3B-base-RAG
    positive_prompts:
      - "rispondi a domande"
      - "cosa è"
      - "chi è"
      - "dove è"
      - "come si"
      - "spiegami"
      - "definisci"
  - source_model: FairMind/Minerva-3B-Instruct-v1.0
    positive_prompts:
      - "istruzione"
      - "input"
      - "risposta"
      - "scrivi"
      - "sequenza"
      - "istruzioni"
dtype: bfloat16
```
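
For reference, a minimal sketch of how such a configuration is typically consumed to reproduce the merge. This step is not part of the original card: `config.yaml` is a hypothetical file containing the block above, and the exact CLI invocation may vary across mergekit versions.

```python
# Hedged sketch (assumption, not from the original card): build the MoE merge locally.
# Assumes config.yaml holds the configuration shown above and that the mergekit-moe
# entry point is available; flags and behavior may differ between mergekit releases.
!pip install -qU mergekit
!mergekit-moe config.yaml merged-minerva-moe
```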
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "ludocomito/Minerva-MoE-3x3B"
tokenizer = AutoTokenizer.from_pretrained(model)

# Load the merged model in 4-bit via bitsandbytes so it fits on a smaller GPU.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Build a chat-formatted prompt and generate a response.
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
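
If you prefer to skip 4-bit quantization, a minimal sketch of loading the model directly with `AutoModelForCausalLM` is shown below. This variant is not from the original card; it assumes enough GPU memory for bfloat16 weights, and the prompt text is purely illustrative.

```python
# Minimal sketch (assumption, not from the original card): full-precision loading
# without bitsandbytes. Requires sufficient GPU memory; the prompt is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "ludocomito/Minerva-MoE-3x3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Le Alpi sono", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```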