---
base_model: cognitivecomputations/dolphin-2.9.3-llama-3-8b
language:
  - en
license: apache-2.0
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - llama
  - trl
  - sft
---

# Uploaded model

- **Developed by:** AashishKumar
- **License:** apache-2.0
- **Finetuned from model:** cognitivecomputations/dolphin-2.9.3-llama-3-8b
Basic usage with `transformers`:

```python
from transformers import AutoTokenizer, LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained("otonomy/Cn_2_9_3_Hinglish_llama3_7b_8kAk")
tokenizer = AutoTokenizer.from_pretrained("otonomy/Cn_2_9_3_Hinglish_llama3_7b_8kAk")

prompt = "ky tumhe la la land pasand hai?"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate
generate_ids = model.generate(inputs.input_ids, max_length=30)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
```
Or use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="otonomy/Cn_2_9_3_Hinglish_llama3_7b_8kAk")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
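If you format prompts by hand instead of passing `messages` to the pipeline, note that the dolphin-2.9.x base models are trained on the ChatML prompt format. A minimal sketch of that format follows; it assumes this fine-tune kept the base model's template, and the `build_chatml_prompt` helper is illustrative only. `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` is the authoritative way to get the correct string.

```python
# Hypothetical helper showing the ChatML layout used by dolphin-2.9.x base
# models; verify against the tokenizer's chat template before relying on it.
def build_chatml_prompt(messages, add_generation_prompt=True):
    parts = []
    for m in messages:
        # Each turn is wrapped in <|im_start|>{role} ... <|im_end|> markers.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful Hinglish assistant."},
    {"role": "user", "content": "ky tumhe la la land pasand hai?"},
])
```

The resulting string can be tokenized and passed to `model.generate` exactly like the plain prompt above.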