
Overview

The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
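To make "Sparse Mixture of Experts" concrete, here is a minimal sketch of Mixtral-style top-2 routing: a router scores 8 experts per token and only the 2 best are evaluated, their outputs mixed by softmax weights. All shapes, names, and weights below are illustrative stand-ins, not the model's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, top_k, d = 8, 2, 4  # Mixtral uses 8 experts, top-2 routing

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

token = rng.standard_normal(d)                  # one token's hidden state
router_w = rng.standard_normal((n_experts, d))  # router (gating) weights
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]

logits = router_w @ token
top = np.argsort(logits)[-top_k:]   # indices of the 2 highest-scoring experts
weights = softmax(logits[top])      # renormalise gate weights over the chosen 2

# Only the selected experts run; the rest are skipped (the "sparse" part).
out = sum(w * (experts[i].T @ token) for w, i in zip(weights, top))
print(out.shape)
```

Because only 2 of the 8 expert FFNs run per token, inference cost scales with the active experts, not the full 46.7B parameter count.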

Variants

| No | Variant   | Cortex CLI command             |
|----|-----------|--------------------------------|
| 1  | 7x8b-gguf | `cortex run mixtral:7x8b-gguf` |

Use it with Jan (UI)

  1. Install Jan using the Quickstart.
  2. In the Jan Model Hub, use:
    cortexhub/mixtral

Use it with Cortex (CLI)

  1. Install Cortex using the Quickstart.
  2. Run the model with the command:
    cortex run mixtral

Credits

Format: GGUF
Model size: 46.7B params
Architecture: llama
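Since this variant ships in GGUF format, here is a minimal sketch of parsing the two fields every GGUF file begins with per the GGUF spec: a 4-byte `GGUF` magic and a little-endian uint32 version. The header bytes below are fabricated for the demo rather than read from a real model file.

```python
import struct

def read_gguf_header(data: bytes) -> int:
    """Validate the GGUF magic and return the format version."""
    magic, version = struct.unpack_from("<4sI", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return version

# Fabricated header: magic "GGUF" followed by version 3 (little-endian uint32).
fake_header = b"GGUF" + struct.pack("<I", 3)
print(read_gguf_header(fake_header))  # → 3
```

On a real download, you would read the first bytes of the `.gguf` file instead of a fabricated buffer; tools like Cortex perform this validation for you.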
