---
base_model: unsloth/Mistral-Nemo-Base-2407-bnb-4bit
language:
  - en
license: apache-2.0
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - mistral
  - trl
---

# Fireball-Mistral-Nemo-12B-Philos

Supervised fine-tuned on a dataset of philosophy, math, coding, and languages.

# Original Model Card

# Model Card for Mistral-Nemo-Instruct-2407

The Mistral-Nemo-Instruct-2407 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-Nemo-Base-2407. Trained jointly by Mistral AI and NVIDIA, it significantly outperforms existing models smaller or similar in size.

For more details about this model, please refer to our release blog post.

## Key features

- Released under the Apache 2 License
- Pre-trained and instructed versions
- Trained with a 128k context window
- Trained on a large proportion of multilingual and code data
- Drop-in replacement of Mistral 7B

## Model Architecture

Mistral Nemo is a transformer model, with the following architecture choices:

- Layers: 40
- Dim: 5,120
- Head dim: 128
- Hidden dim: 14,336
- Activation Function: SwiGLU
- Number of heads: 32
- Number of kv-heads: 8 (GQA)
- Vocabulary size: 2**17 ~= 128k
- Rotary embeddings (theta = 1M)
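
For reference, here is a rough sketch of how these hyperparameters map onto a Hugging Face `MistralConfig`. The field names follow the `transformers` Mistral configuration (the `head_dim` argument is only accepted by recent releases that support Mistral Nemo); the authoritative values for this checkpoint are the ones in its own `config.json`, not this illustration.

```python
from transformers import MistralConfig

# Illustrative only: the architecture choices above expressed as a MistralConfig.
config = MistralConfig(
    num_hidden_layers=40,       # Layers
    hidden_size=5120,           # Dim
    head_dim=128,               # Head dim
    intermediate_size=14336,    # Hidden dim (SwiGLU MLP width)
    hidden_act="silu",          # SwiGLU gating uses the SiLU activation
    num_attention_heads=32,     # Number of heads
    num_key_value_heads=8,      # Grouped-query attention (GQA)
    vocab_size=2**17,           # 131072 ~= 128k tokens
    rope_theta=1_000_000.0,     # Rotary embeddings, theta = 1M
)
print(config)
```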

## Mistral Inference

### Install

It is recommended to use mistralai/Mistral-Nemo-Base-2407 with mistral-inference. For HF transformers code snippets, please keep scrolling.

```sh
pip install mistral_inference
```
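
As a sketch of the usual mistral-inference workflow, the base checkpoint's consolidated weights and Tekken tokenizer can be downloaded with `huggingface_hub` and then pointed at by the tooling that ships with `mistral_inference`. The file patterns below match what the `mistralai/Mistral-Nemo-Base-2407` repository lists; adjust the local path as needed.

```python
from pathlib import Path

from huggingface_hub import snapshot_download

# Placeholder local directory for the mistral-inference-format files.
models_path = Path.home() / "mistral_models" / "Nemo-Base-2407"
models_path.mkdir(parents=True, exist_ok=True)

# Fetch params.json, the consolidated weights, and the tekken tokenizer.
snapshot_download(
    repo_id="mistralai/Mistral-Nemo-Base-2407",
    allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"],
    local_dir=models_path,
)
```

After the download finishes, the `mistral-demo` CLI installed by `mistral_inference` can be pointed at that folder to sanity-check the weights.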

## Transformers

NOTE: Until a new release has been made, you need to install transformers from source:

```sh
pip install git+https://github.com/huggingface/transformers.git
```

If you want to use Hugging Face transformers to generate text, you can do something like this.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpistemeAI2/Fireball-Mistral-Nemo-12B-Philos"

# Load the tokenizer and model weights from the Hub.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode a prompt, generate up to 20 new tokens, and decode the result.
inputs = tokenizer("Hello my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Unlike previous Mistral models, Mistral Nemo requires lower temperatures. We recommend using a temperature of 0.3.
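
For example, a sampled generation with that temperature could look like the following sketch; the prompt and the other sampling parameters are illustrative, not values prescribed by this card.

```python
from transformers import pipeline

# Illustrative sampling setup using the recommended temperature of 0.3;
# max_new_tokens and top_p are arbitrary example values.
generator = pipeline("text-generation", model="EpistemeAI2/Fireball-Mistral-Nemo-12B-Philos")
result = generator(
    "Explain the trolley problem in two sentences.",
    max_new_tokens=64, do_sample=True, temperature=0.3, top_p=0.9,
)
print(result[0]["generated_text"])
```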

# Uploaded model

- Developed by: EpistemeAI
- License: apache-2.0
- Finetuned from model: unsloth/Mistral-Nemo-Base-2407-bnb-4bit

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
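
The training script itself is not included in this card. As a hypothetical illustration of what an Unsloth + TRL supervised fine-tuning setup for this base checkpoint typically looks like, see the sketch below; the dataset name, LoRA rank, and training arguments are placeholders, not the values actually used for this model.

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the stated 4-bit base checkpoint through Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Mistral-Nemo-Base-2407-bnb-4bit",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank, alpha, and target modules are example values.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset with a "text" column; the actual SFT mixture
# (philosophy, math, coding, languages) is not published.
dataset = load_dataset("your_sft_dataset", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```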