
Llama-3-8B-Lexi-Smaug-Uncensored

Llama-3-8B-Lexi-Smaug-Uncensored is a merge of the following models using LazyMergekit:

Orenguteng/Llama-3-8B-Lexi-Uncensored
abacusai/Llama-3-Smaug-8B

πŸ‘€ Looking for GGUF?

Static quants are available at https://huggingface.co/mradermacher/Llama-3-8B-Lexi-Smaug-Uncensored-GGUF

Weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-8B-Lexi-Smaug-Uncensored-i1-GGUF
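
If you want to run one of the quants directly, here is a minimal sketch using llama-cpp-python. The Q4_K_M filename pattern and the n_ctx value are assumptions; pick whichever quant file the repo actually provides.

!pip install -qU llama-cpp-python huggingface-hub

from llama_cpp import Llama

# Download a quant from the Hub and load it.
# Assumption: the repo contains a Q4_K_M file; change the pattern to the quant you want.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Llama-3-8B-Lexi-Smaug-Uncensored-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=8192,  # assumption: context window to allocate
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}]
)
print(out["choices"][0]["message"]["content"])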

🧩 Configuration

slices:
  - sources:
      - model: Orenguteng/Llama-3-8B-Lexi-Uncensored
        layer_range: [0, 32]
      - model: abacusai/Llama-3-Smaug-8B
        layer_range: [0, 32]
merge_method: slerp
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
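
The t values are the slerp interpolation weights (0 keeps the base model, 1 takes abacusai/Llama-3-Smaug-8B); mergekit spreads each list across the 32 layers, so self-attention and MLP blocks are blended on opposing curves, with a flat 0.5 everywhere else. To reproduce the merge locally, a minimal sketch assuming the YAML above is saved as config.yaml (flags follow mergekit's CLI; if the PyPI package lags, install mergekit from its GitHub repo instead):

!pip install -qU mergekit
!mergekit-yaml config.yaml ./Llama-3-8B-Lexi-Smaug-Uncensored --copy-tokenizer --lazy-unpickle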

πŸ’» Usage

!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "theprint/Llama-3-8B-Lexi-Smaug-Uncensored"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the chat messages with the model's Llama 3 chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the merged model; device_map="auto" places layers on the available GPU(s).
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a response.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])