CooperW committed c40049b (1 parent: cc10443)

Delete Readme.md

Files changed (1): Readme.md (+0 -138)

Readme.md (DELETED):
# Project SecureAi Labs

This project fine-tunes language models with the Unsloth library and LoRA adapters. It provides utilities for training, testing, and data formatting for models such as Phi-3, Gemma, and Meta-Llama.
## Table of Contents

1. [Prerequisites](#prerequisites)
2. [File Descriptions](#file-descriptions)
   - [TRAINER.ipynb](#traineripynb)
   - [TESTER.ipynb](#testeripynb)
   - [dataFormat.ipynb](#dataformatipynb)
3. [Usage](#usage)
   - [Environment Setup](#environment-setup)
   - [Training a Model](#training-a-model)
   - [Testing the Model](#testing-the-model)
   - [Formatting Data](#formatting-data)
4. [Additional Resources](#additional-resources)

---
## Prerequisites

Before running the project, ensure you have the following:
- A [Hugging Face](https://huggingface.co) account and token.
- Google Colab or a local environment with Python 3.x and CUDA support.
- The required packages, such as `unsloth`, `huggingface_hub`, `peft`, and `trl` (installed automatically by the notebooks).

**Note on GPU requirements** for the supported models:

```python
models = [
    'Phi-3.5-mini-instruct-bnb-4bit',      # min training GPU: T4,   min testing GPU: T4, max model size: 14.748 GB
    'gemma-2-27b-it-bnb-4bit',             # min training GPU: A100, min testing GPU: L4, max model size: 39.564 GB
    'Meta-Llama-3.1-8B-Instruct-bnb-4bit'  # min training GPU: T4,   min testing GPU: T4, max model size: 22.168 GB
]
```
Refer to the [Unsloth Documentation](https://unsloth.ai/) for more details.

## File Descriptions

### 1. `TRAINER.ipynb`
This notebook trains a language model with LoRA adapters using the Unsloth library. Its core functionality includes:
- Loading a pre-trained model from Hugging Face with `FastLanguageModel`.
- Attaching LoRA adapters for parameter-efficient fine-tuning of large models.
- Setting training configurations (e.g., learning rate, number of epochs, batch size) with the `SFTTrainer` from the `trl` library.
- Optionally resuming training from the last checkpoint.
- Uploading checkpoints and models to Hugging Face during or after training (see the sketch after this list).
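A minimal sketch of that flow: the model name, LoRA rank, and trainer arguments here are illustrative, not the notebook's exact values, and `dataset` stands in for whatever formatted dataset the notebook loads.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

# Load a 4-bit pre-trained model from Hugging Face.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Phi-3.5-mini-instruct-bnb-4bit",  # illustrative choice
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank (illustrative)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,        # assumed: a formatted dataset with a "text" column
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        learning_rate=2e-4,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()  # pass resume_from_checkpoint=True to continue from a checkpoint
```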
#### How to Use:
1. Open this notebook in Google Colab or a similar environment.
2. Ensure you have set up your Hugging Face token (refer to the section below for setup).
3. Customize the training parameters if needed.
4. Run the notebook cells to train the model.
### 2. `TESTER.ipynb`

This notebook evaluates a fine-tuned model on a test dataset using standard classification metrics: accuracy, precision, recall, and F1 score. It provides the following functionality:
- Loads the fine-tuned model with its LoRA adapters.
- Defines a function to evaluate the model's predictions on a test dataset (a sketch follows this list).
- Outputs accuracy and other classification metrics.
- Displays confusion matrices for better insight into model performance.
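A hedged sketch of such an evaluation function, assuming predictions and gold labels have already been collected as lists of class names; `y_true` and `y_pred` are placeholder names, and the notebook's own helper may differ.

```python
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_recall_fscore_support)

def evaluate_predictions(y_true, y_pred):
    """Compute accuracy, weighted precision/recall/F1, and a confusion matrix."""
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision,
        "recall": recall,
        "f1": f1,
        "confusion_matrix": confusion_matrix(y_true, y_pred),
    }
```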
#### How to Use:
1. Load this notebook in your environment.
2. Specify the test dataset and model details.
3. Run the evaluation loop to get accuracy, predictions, and metrics visualizations.
### 3. `dataFormat.ipynb`

This notebook formats datasets into the structure expected for training and testing, mapping raw text into a form suitable for language-model training, particularly multi-turn conversations:
- Formats conversations into a chat-based template using Unsloth's `chat_templates` (a sketch follows this list).
- Maps data fields such as "role", "content", and user/assistant turns.
- Prepares the dataset for tokenization and input to the model.
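A minimal sketch of that mapping, assuming a dataset whose `conversations` column holds role/content turns; the template name and column names are assumptions to adjust for your model and data.

```python
from unsloth.chat_templates import get_chat_template

# Wrap the tokenizer (assumed to come from model loading) with a chat
# template matching the target model.
tokenizer = get_chat_template(tokenizer, chat_template="llama-3.1")

def formatting_prompts_func(examples):
    """Render role/content conversations into a single "text" field per example."""
    texts = [
        tokenizer.apply_chat_template(convo, tokenize=False, add_generation_prompt=False)
        for convo in examples["conversations"]  # assumed column name
    ]
    return {"text": texts}
```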
#### How to Use:
1. Open the notebook and specify the dataset you wish to format.
2. Adjust any template settings based on the model you're using.
3. Run the notebook to output the formatted dataset.
---

## Usage

### Environment Setup
1. **Install Unsloth**:
   The following command is included in the notebooks to install Unsloth:
   ```bash
   !pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
   ```

2. **Install Additional Dependencies**:
   These dependencies are also required:
   ```bash
   !pip install --no-deps xformers==0.0.27 trl peft accelerate bitsandbytes triton
   ```
3. **Hugging Face Token Setup**:
   - Add your Hugging Face token as a secret in Google Colab or as an environment variable locally.
   - Use the token to download models and upload checkpoints:
   ```python
   from google.colab import userdata
   from huggingface_hub import login

   login(userdata.get('TOKEN'))  # 'TOKEN' is the secret's name in Colab's Secrets panel
   ```
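Outside Colab, the same login can read the token from an environment variable instead (a sketch; `HF_TOKEN` is the conventional variable name, set in your shell beforehand):

```python
import os
from huggingface_hub import login

# export HF_TOKEN=... in your shell before running this
login(os.environ["HF_TOKEN"])
```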
### Training a Model

1. Open `TRAINER.ipynb`.
2. Customize the model, template, and LoRA settings in the notebook.
3. Set training configurations (e.g., epochs, learning rate).
4. Run the notebook to start the training process.

The model will automatically be saved at checkpoints and uploaded to Hugging Face.
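If a run is interrupted, training can be resumed from the last saved checkpoint, and the finished adapters can also be pushed to the Hub by hand (a sketch; `username/repo-name` is a placeholder repo id):

```python
# Resume from the most recent checkpoint in the trainer's output_dir.
trainer.train(resume_from_checkpoint=True)

# Push the trained LoRA adapters and tokenizer to Hugging Face manually.
model.push_to_hub("username/repo-name")      # placeholder repo id
tokenizer.push_to_hub("username/repo-name")
```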
### Testing the Model

1. Load `TESTER.ipynb` in your environment.
2. Load the fine-tuned model with LoRA adapters (a loading sketch follows this list).
3. Specify a test dataset in the appropriate format.
4. Run the evaluation function to get predictions, accuracy, and other metrics.
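A minimal sketch of loading the fine-tuned adapters for evaluation, assuming they were uploaded to the Hub under a placeholder repo id:

```python
from unsloth import FastLanguageModel

# Load the fine-tuned model (base weights plus LoRA adapters) from the Hub.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="username/repo-name",  # placeholder: your uploaded adapter repo
    max_seq_length=2048,
    load_in_4bit=True,
)

# Switch Unsloth into its faster inference mode before generating predictions.
FastLanguageModel.for_inference(model)
```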
### Formatting Data

1. Use `dataFormat.ipynb` to format raw data into a training-friendly structure.
2. Map the conversation fields using `formatting_prompts_func` (a usage sketch follows this list).
3. Output the formatted data and use it in the training or testing notebooks.
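Applied over a Hugging Face dataset, the mapping is a single call, assuming the `formatting_prompts_func` sketched under `dataFormat.ipynb` above and a placeholder dataset id:

```python
from datasets import load_dataset

# Placeholder dataset id; replace with your own raw conversation data.
dataset = load_dataset("username/raw-conversations", split="train")

# Add a "text" column rendered with the model's chat template.
dataset = dataset.map(formatting_prompts_func, batched=True)
```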
---

## Additional Resources

- Unsloth Documentation: [Unsloth.ai](https://unsloth.ai/)
- Hugging Face Security Tokens: [Hugging Face Tokens](https://huggingface.co/docs/hub/en/security-tokens)
- For issues, please refer to each library's official documentation or GitHub pages.

---