---
license: mit
---

# Project SecureAi Labs

This project fine-tunes language models using the Unsloth library with LoRA adapters, and provides utilities for training, testing, and data formatting for models such as Phi-3, Gemma, and Meta-Llama.

## Table of Contents
1. [Prerequisites](#prerequisites)
2. [File Descriptions](#file-descriptions)
   - [TRAINER.ipynb](#traineripynb)
   - [TESTER.ipynb](#testeripynb)
   - [dataFormat.ipynb](#dataformatipynb)
3. [Usage](#usage)
   - [Environment Setup](#environment-setup)
   - [Training a Model](#training-a-model)
   - [Testing the Model](#testing-the-model)
   - [Formatting Data](#formatting-data)
4. [Additional Resources](#additional-resources)

---

## Prerequisites

Before running the project, ensure you have the following:
- A [Hugging Face](https://huggingface.co) account and token.
- Google Colab or a local environment with Python 3.x and CUDA support.
- Installed packages such as `unsloth`, `huggingface_hub`, `peft`, `trl`, and others (installed automatically in the notebooks).

**Note on GPU requirements:**

```python
models = [
    'Phi-3.5-mini-instruct-bnb-4bit',       # min training GPU: T4,   min testing GPU: T4, max model size: 14.748 GB
    'gemma-2-27b-it-bnb-4bit',              # min training GPU: A100, min testing GPU: L4, max model size: 39.564 GB
    'Meta-Llama-3.1-8B-Instruct-bnb-4bit'   # min training GPU: T4,   min testing GPU: T4, max model size: 22.168 GB
]
```

Refer to the [Unsloth Documentation](https://unsloth.ai/) for more details.

## File Descriptions

### 1. `TRAINER.ipynb`

This notebook trains a language model with LoRA adapters using the Unsloth library. The core functionality includes:
- Loading a pre-trained model from Hugging Face using `FastLanguageModel`.
- Attaching LoRA adapters for efficient fine-tuning of large models.
- Setting training configurations (e.g., learning rate, number of epochs, batch size) using the `SFTTrainer` from the `trl` library.
- Optionally resuming training from the last checkpoint.
- Uploading checkpoints and models to Hugging Face during or after training.

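A minimal sketch of this flow, assuming Unsloth's `FastLanguageModel` API and `trl`'s `SFTTrainer` (the model choice, dataset path, and hyperparameters are illustrative, not the notebook's exact settings):

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load a 4-bit quantized base model from Hugging Face.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
)

# Hypothetical training file with a "text" column (see dataFormat.ipynb).
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        learning_rate=2e-4,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```
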
#### How to Use:
1. Open this notebook in Google Colab or a similar environment.
2. Ensure you have set up your Hugging Face token (see [Environment Setup](#environment-setup) below).
3. Customize the training parameters if needed.
4. Run the notebook cells to train the model.

### 2. `TESTER.ipynb`

This notebook evaluates a fine-tuned model on a test dataset using standard classification metrics: accuracy, precision, recall, and F1 score. It provides the following functionality:
- Loads the fine-tuned model with its LoRA adapters.
- Defines a function to evaluate the model's predictions on a test dataset.
- Outputs accuracy and other classification metrics.
- Displays confusion matrices for better insight into model performance.

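The notebook's exact evaluation loop is defined inside `TESTER.ipynb`; as a rough sketch, metrics like these are commonly computed with scikit-learn (the labels below are hypothetical stand-ins for what the notebook collects):

```python
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

# y_true: gold labels from the test set; y_pred: labels parsed from model output.
y_true = ["positive", "negative", "positive", "neutral"]
y_pred = ["positive", "negative", "neutral", "neutral"]

print("Accuracy:", accuracy_score(y_true, y_pred))
print(classification_report(y_true, y_pred))  # per-class precision, recall, F1
print(confusion_matrix(y_true, y_pred))       # rows: true labels, columns: predictions
```
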
#### How to Use:
1. Load this notebook in your environment.
2. Specify the test dataset and model details.
3. Run the evaluation loop to get accuracy, predictions, and metrics visualizations.

### 3. `dataFormat.ipynb`

This notebook formats datasets into the correct structure for training and testing, mapping raw text data into a format suitable for language-model training, particularly multi-turn conversations:
- Formats conversations into a chat-based template using Unsloth's `chat_templates`.
- Maps data fields like "role", "content", and user/assistant turns.
- Prepares the dataset for tokenization and input to the model.

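A minimal sketch of that mapping, assuming Unsloth's `chat_templates` helpers and a `tokenizer`/`dataset` loaded as in `TRAINER.ipynb` (the template choice and raw field names are illustrative):

```python
from unsloth.chat_templates import get_chat_template

# Wrap the tokenizer in a chat template so conversations render as model-ready text.
tokenizer = get_chat_template(
    tokenizer,
    chat_template="chatml",                         # illustrative template choice
    mapping={"role": "from", "content": "value",    # map the raw dataset's field
             "user": "human", "assistant": "gpt"},  # names onto the template's
)

def formatting_prompts_func(examples):
    # Render each multi-turn conversation into a single training string.
    convos = examples["conversations"]
    texts = [tokenizer.apply_chat_template(c, tokenize=False, add_generation_prompt=False)
             for c in convos]
    return {"text": texts}

dataset = dataset.map(formatting_prompts_func, batched=True)
```
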
#### How to Use:
1. Open the notebook and specify the dataset you wish to format.
2. Adjust any template settings based on the model you're using.
3. Run the notebook to output the formatted dataset.

---

## Usage

### Environment Setup

1. **Install Unsloth**:
   The following command is included in the notebooks to install Unsloth:
   ```bash
   !pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
   ```

2. **Install Additional Dependencies**:
   These dependencies are also required:
   ```bash
   !pip install --no-deps xformers==0.0.27 trl peft accelerate bitsandbytes triton
   ```

3. **Hugging Face Token Setup**:
   - Add your Hugging Face token as an environment variable in Google Colab or in your local environment.
   - Use the token to download models and upload checkpoints:
   ```python
   from google.colab import userdata
   from huggingface_hub import login

   login(userdata.get('TOKEN'))
   ```
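
Outside Colab, the same `login` call can read the token from an environment variable instead (the variable name is a common convention, not mandated by the notebooks):

```python
import os
from huggingface_hub import login

# Assumes the token was exported beforehand, e.g. `export HF_TOKEN=hf_...`
login(os.environ["HF_TOKEN"])
```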

### Training a Model

1. Open `TRAINER.ipynb`.
2. Customize the model, template, and LoRA settings in the notebook.
3. Set training configurations (e.g., epochs, learning rate).
4. Run the notebook to start the training process.

The model will automatically be saved at checkpoints and uploaded to Hugging Face.

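Checkpointing and Hub uploads of this kind are typically controlled through the training arguments; a hedged sketch reusing the `trainer` from the `TRAINER.ipynb` outline above (repository name and step counts are illustrative):

```python
from transformers import TrainingArguments

# Passed as `args=` when constructing the SFTTrainer shown earlier.
args = TrainingArguments(
    output_dir="outputs",
    save_strategy="steps",                    # write a local checkpoint every save_steps
    save_steps=100,
    push_to_hub=True,                         # mirror checkpoints/models to the Hub
    hub_model_id="your-username/your-model",  # hypothetical target repository
)

# Resume from the newest checkpoint in output_dir, as TRAINER.ipynb optionally does:
trainer.train(resume_from_checkpoint=True)
```
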
### Testing the Model

1. Load `TESTER.ipynb` in your environment.
2. Load the fine-tuned model with LoRA adapters.
3. Specify a test dataset in the appropriate format.
4. Run the evaluation function to get predictions, accuracy, and other metrics.

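Loading a fine-tuned checkpoint for evaluation follows the same Unsloth API as training; a brief sketch (the repository name and prompt are hypothetical):

```python
from unsloth import FastLanguageModel

# Load the fine-tuned model together with its LoRA adapters.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="your-username/your-model",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch Unsloth into its faster inference mode

inputs = tokenizer("Classify the sentiment: I loved this movie.",
                   return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
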
### Formatting Data

1. Use `dataFormat.ipynb` to format raw data into a training-friendly structure.
2. Map the conversation fields using the `formatting_prompts_func` (sketched under [dataFormat.ipynb](#dataformatipynb) above).
3. Output the formatted data and use it in the training or testing notebooks.

---

## Additional Resources

- Unsloth Documentation: [Unsloth.ai](https://unsloth.ai/)
- Hugging Face Security Tokens: [Hugging Face Tokens](https://huggingface.co/docs/hub/en/security-tokens)
- For issues, please refer to each library's official documentation or GitHub pages.

---