---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# To Use This Model
**STEP 1:**
- Install Unsloth, xFormers (Flash Attention), and all other required packages for your environment and GPU.
- To install Unsloth on your own computer, follow the installation instructions on the Unsloth GitHub page: [LINK IS HERE](https://github.com/unslothai/unsloth#installation-instructions---conda). A minimal pip-based sketch is shown below.
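On a recent CUDA setup, installation can be as simple as the following. This is an assumption, not the only supported path; the exact command depends on your CUDA and PyTorch versions, so prefer the instructions linked above:
```
pip install unsloth
```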
**Now follow the code:**
```
from unsloth import FastLanguageModel
import torch

max_seq_length = 2048  # Choose any! Unsloth supports RoPE scaling internally.
dtype = None  # None for auto-detection; float16 for Tesla T4/V100, bfloat16 for Ampere+
load_in_4bit = True  # Use 4-bit quantization to reduce memory usage. Can be False.
```
```
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "DipeshChaudhary/ShareGPTChatBot-Counselchat1",  # Your fine-tuned model
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
```
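Loading in 4-bit keeps the 8B model's weight footprint far below its fp16 size. As a quick sanity check of actual GPU usage, a minimal sketch (assuming a CUDA device and the `torch` import from above):
```
# Rough check of GPU memory taken by the quantized weights (assumes CUDA)
print(f"GPU memory allocated: {torch.cuda.memory_allocated() / 1024**3:.2f} GiB")
```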
# Chat Template
We use the Llama-3 format for conversation-style fine-tunes, with Open Assistant conversations in ShareGPT style.

**Use Unsloth's `get_chat_template` function to get the correct chat template. It supports `zephyr`, `chatml`, `mistral`, `llama`, `alpaca`, `vicuna`, `vicuna_old`, and Unsloth's own optimized `unsloth` template.**
```
from unsloth.chat_templates import get_chat_template
tokenizer = get_chat_template(
    tokenizer,
    chat_template = "llama-3",  # Supports zephyr, chatml, mistral, llama, alpaca, vicuna, vicuna_old, unsloth
    mapping = {"role": "from", "content": "value", "user": "human", "assistant": "gpt"},  # ShareGPT style
)
```
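To sanity-check the mapping, you can render a short ShareGPT-style exchange to plain text before tokenizing. A minimal sketch; the messages here are purely illustrative:
```
messages = [
    {"from": "human", "value": "Hello!"},
    {"from": "gpt", "value": "Hi! How can I help you today?"},
]
# tokenize=False returns the formatted prompt string rather than token IDs
print(tokenizer.apply_chat_template(messages, tokenize = False))
```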
## For Actual Inference
```
from transformers import TextStreamer

FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

messages = [
    {"from": "human", "value": "I'm worried about my exam."},
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True,  # Must be added for generation
    return_tensors = "pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(input_ids = inputs, streamer = text_streamer, max_new_tokens = 128, use_cache = True)
```
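If you want the reply as a string instead of streaming it to stdout, you can decode the generated tokens directly. A sketch reusing `model`, `tokenizer`, and `inputs` from above:
```
outputs = model.generate(input_ids = inputs, max_new_tokens = 128, use_cache = True)
# Slice off the prompt tokens so only the newly generated reply is decoded
reply = tokenizer.batch_decode(outputs[:, inputs.shape[1]:], skip_special_tokens = True)[0]
print(reply)
```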
# Uploaded Model
- **Developed by:** DipeshChaudhary
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)