---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---


# To Use This Model

  # STEP 1: Install the Dependencies
  - Install Unsloth, Xformers (Flash Attention), and the other required packages for your environment and GPU.
  - To install Unsloth on your own machine, follow the installation instructions on the Unsloth GitHub page: [installation instructions](https://github.com/unslothai/unsloth#installation-instructions---conda)

  # STEP 2: Follow the Code Below
  **LOAD THE MODEL**
  
  ```
  from unsloth import FastLanguageModel
  ```
  
  ```
  import torch

  max_seq_length = 2048 # Choose any! Unsloth supports RoPE scaling internally.
  dtype = None # None for auto-detection. Use float16 for Tesla T4/V100, bfloat16 for Ampere+.
  load_in_4bit = True # Use 4-bit quantization to reduce memory usage. Can be False.
  ```
  ```
  model, tokenizer = FastLanguageModel.from_pretrained(
      model_name = "DipeshChaudhary/ShareGPTChatBot-Counselchat1", # Your fine-tuned model
      max_seq_length = max_seq_length,
      dtype = dtype,
      load_in_4bit = load_in_4bit,
  )
  ```
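
  If you prefer to pin the dtype explicitly rather than rely on auto-detection, a minimal sketch (assumes a CUDA GPU; not part of the original recipe):
  ```
  import torch

  # bfloat16 on Ampere or newer GPUs, float16 otherwise (e.g. Tesla T4, V100).
  dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
  ```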
  **SET UP THE CHAT TEMPLATE**

  This model uses the Llama-3 format for conversation-style finetunes, trained on Open Assistant conversations in ShareGPT style. Unsloth's get_chat_template function returns the correct chat template; it supports zephyr, chatml, mistral, llama, alpaca, vicuna, vicuna_old, and Unsloth's own optimized template.
  ```
  from unsloth.chat_templates import get_chat_template

  tokenizer = get_chat_template(
      tokenizer,
      chat_template = "llama-3", # Supports zephyr, chatml, mistral, llama, alpaca, vicuna, vicuna_old, unsloth
      mapping = {"role" : "from", "content" : "value", "user" : "human", "assistant" : "gpt"}, # ShareGPT style
  )
  ```
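
  To check what the mapping does, you can render a ShareGPT-style conversation to the raw prompt string without tokenizing (a quick sanity check, not part of the original card; the example messages are made up):
  ```
  # ShareGPT turns use "from"/"value" keys and "human"/"gpt" roles; the mapping
  # above translates them to the template's "role"/"content" schema.
  messages = [
      {"from": "human", "value": "Hello, who are you?"},
      {"from": "gpt", "value": "I'm a counseling chatbot. How can I help?"},
  ]
  print(tokenizer.apply_chat_template(messages, tokenize = False)) # Raw Llama-3 prompt
  ```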
  # STEP 3: Run Inference
  ```
  from transformers import TextStreamer

  FastLanguageModel.for_inference(model) # Enable native 2x faster inference

  messages = [
      {"from": "human", "value": "I'm worried about my exam."},
  ]
  inputs = tokenizer.apply_chat_template(
      messages,
      tokenize = True,
      add_generation_prompt = True, # Must add for generation
      return_tensors = "pt",
  ).to("cuda")

  text_streamer = TextStreamer(tokenizer)
  outputs = model.generate(input_ids = inputs, streamer = text_streamer, max_new_tokens = 128, use_cache = True)
  ```
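
  If you want the reply as a plain string in addition to the streamed output, you can decode only the newly generated tokens (a minimal sketch; outputs and inputs come from the block above):
  ```
  # Skip the prompt tokens, then decode just the model's reply.
  reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens = True)
  print(reply)
  ```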

# Uploaded model

- **Developed by:** DipeshChaudhary
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)