# Simple QLoRA Model Inference

This guide demonstrates how to perform inference using a QLoRA (Quantized Low-Rank Adaptation) fine-tuned model with a single code cell.

## Requirements

- Python 3.7+
- PyTorch
- Transformers
- PEFT (Parameter-Efficient Fine-Tuning)
- bitsandbytes

Install the required packages:

```bash
pip install torch transformers peft bitsandbytes
```

## Inference Code

Copy and paste the following code into a Python script or Jupyter notebook cell:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Set up model paths
BASE_MODEL_PATH = "path/to/your/base/model"
ADAPTER_PATH = "path/to/your/qlora/adapter"

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL_PATH)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

# Load the base model in 4-bit and attach the QLoRA adapter
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL_PATH,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, ADAPTER_PATH)
model.eval()  # inference mode: disables dropout

# Generate text
prompt = "Explain quantum computing in simple terms:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(generated_text)
```

## Usage

1. Replace `BASE_MODEL_PATH` with the path to your base model (a local directory or a Hugging Face Hub repository ID; see the example after this list).
2. Replace `ADAPTER_PATH` with the path to your QLoRA adapter (also a local directory or Hub ID).
3. Modify the `prompt` variable to use your desired input text.
4. Run the code cell.
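
For instance, the two path variables could be set as follows. The names below are purely illustrative placeholders, not values from this guide:

```python
# Illustrative values only -- substitute your own base model and adapter
BASE_MODEL_PATH = "meta-llama/Llama-2-7b-hf"  # Hub repository ID or local directory
ADAPTER_PATH = "./outputs/qlora-adapter"      # directory containing adapter_config.json and adapter weights
```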

## Customization

- Adjust `max_new_tokens`, `temperature`, and other generation parameters in the `model.generate()` call to control the output, as in the sketch below.
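
Two common variations are sketched here; the specific values are illustrative, not recommendations:

```python
# Deterministic (greedy) decoding: reproducible, but can be repetitive
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=False)

# Nucleus sampling with a mild repetition penalty: more varied output
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.9,
    top_p=0.95,
    repetition_penalty=1.1,
)
```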

## Troubleshooting

- If you encounter CUDA out-of-memory errors, try reducing `max_new_tokens`, using a smaller model, or trimming the memory footprint of the 4-bit load (see the sketch after this list).
- Ensure your GPU drivers and CUDA toolkit are up to date.
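
If memory is tight, enabling nested (double) quantization can shave a little more off the model's footprint. A minimal sketch, assuming the same loading code as above:

```python
import torch
from transformers import BitsAndBytesConfig

# Drop-in replacement for bnb_config in the script above; the rest of the loading code is unchanged
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,  # also quantizes the quantization constants, saving a bit more memory
)
```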

For more advanced usage or optimizations, refer to the Hugging Face documentation for Transformers and PEFT.