
# TIES-Merging

TIES-Merging is a merge of the following models using LazyMergekit:

* [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
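
The TIES method (from the paper "TIES-Merging: Resolving Interference When Merging Models") combines fine-tuned models in three steps: trim each task vector (the delta from the base model) to its largest-magnitude entries, elect a per-parameter sign across models, and average only the deltas that agree with the elected sign. The toy sketch below is a minimal illustration of that idea, not mergekit's actual implementation; the `ties_merge` helper is hypothetical, and `density` / `weight` correspond to the parameters in the configuration further down.

```python
import torch

def ties_merge(base, finetuned, density=0.5, weight=0.5):
    """Toy single-tensor TIES merge (illustrative only, not mergekit's code)."""
    # Task vectors: difference between each fine-tuned model and the base.
    deltas = [ft - base for ft in finetuned]

    # Trim: keep only the top-`density` fraction of each delta by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(density * d.numel()))
        threshold = d.abs().flatten().kthvalue(d.numel() - k + 1).values
        trimmed.append(torch.where(d.abs() >= threshold, d, torch.zeros_like(d)))

    # Elect sign: dominant sign per parameter across the trimmed deltas.
    elected = torch.sign(sum(trimmed))

    # Merge: average only the nonzero deltas that agree with the elected sign.
    stacked = torch.stack(trimmed)
    agree = (torch.sign(stacked) == elected) & (stacked != 0)
    merged = (stacked * agree).sum(0) / agree.sum(0).clamp(min=1)

    # Simplification: apply a single scalar `weight` to the merged delta.
    return base + weight * merged

# Toy example on one parameter tensor
base = torch.zeros(6)
ft_a = torch.tensor([0.9, -0.1, 0.4, 0.0, -0.8, 0.2])
ft_b = torch.tensor([0.7, 0.1, -0.5, 0.0, -0.6, 0.1])
print(ties_merge(base, [ft_a, ft_b]))
```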

## 🧩 Configuration

```yaml
models:
  - model: mistralai/Mistral-7B-Instruct-v0.2
    # no parameters necessary for base model
  - model: Open-Orca/Mistral-7B-OpenOrca
    parameters:
      density: 0.5
      weight: 0.5
  - model: openchat/openchat-3.5-0106
    parameters:
      density: 0.5
      weight: 0.5
  - model: WizardLM/WizardMath-7B-V1.1
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
  normalize: true
dtype: float16
```
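
To reproduce the merge locally, here is a minimal sketch using mergekit's Python interface, assuming the configuration above is saved as `config.yaml` and `mergekit` is installed (`pip install mergekit`). The names `MergeConfiguration`, `MergeOptions`, and `run_merge` follow mergekit's documented API and may differ between versions.

```python
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge configuration shown above
with open("config.yaml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    out_path="./TIES-Merging",           # output directory for the merged model
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer
        lazy_unpickle=True,              # lower peak memory while loading shards
    ),
)
```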

## 💻 Usage

```python
# Install dependencies (notebook syntax)
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Cartinoe5930/TIES-Merging"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Build a text-generation pipeline in half precision
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
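
Note that the `text-generation` pipeline includes the prompt in `generated_text` by default; pass `return_full_text=False` in the pipeline call to print only the model's completion.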