---
license: apache-2.0
tags:
- mixture of experts
- moe
- merge
- mergekit
- mistralai/Mistral-7B-Instruct-v0.2
- nvidia/OpenMath-Mistral-7B-v0.1-hf
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
- nvidia/OpenMath-Mistral-7B-v0.1-hf
---

# mistral_2x7b_v0.1

mistral_2x7b_v0.1 is a Mixture of Experts (MoE) model made with the following models using [mergekit-moe](https://github.com/arcee-ai/mergekit/blob/main/docs/moe.md):
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [nvidia/OpenMath-Mistral-7B-v0.1-hf](https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1-hf)

## 🧩 Configuration

```yaml
base_model: mistralai/Mistral-7B-v0.1
gate_mode: hidden  # one of "hidden", "cheap_embed", or "random"
dtype: bfloat16    # output dtype (float32, float16, or bfloat16)
experts:
  - source_model: mistralai/Mistral-7B-Instruct-v0.2
    positive_prompts:
      - "What are some fun activities to do in Seattle?"
      - "What are the potential long-term economic impacts of raising the minimum wage?"
  - source_model: nvidia/OpenMath-Mistral-7B-v0.1-hf
    positive_prompts:
      - "What is 27 * 49? Show your step-by-step work."
      - "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
```

## 💻 Usage

```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "HachiML/mistral_2x7b_v0.1"
tokenizer = AutoTokenizer.from_pretrained(model)

# Load the model in 4-bit via the text-generation pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Build a chat prompt with the model's chat template and generate a response
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
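
If you prefer to call `generate` directly rather than going through the `pipeline` helper, the following is a minimal sketch of an equivalent setup using `AutoModelForCausalLM` with an explicit `BitsAndBytesConfig` for 4-bit loading. The sampling parameters simply mirror the pipeline example above and are not tuned for this model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_id = "HachiML/mistral_2x7b_v0.1"

# 4-bit quantization, equivalent in spirit to load_in_4bit=True above
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Apply the chat template and generate
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95
)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```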