YasiruDEX committed on
Commit 73ebf30
1 Parent(s): c7279ce

Update README.md

Files changed (1)
  1. README.md +83 -0
README.md CHANGED
@@ -59,3 +59,86 @@ model = AutoModelForCausalLM.from_pretrained(

  # Input: Extract important facial and hair details from the description and give them in a comma-separated string. Suspect is a female, likely in her late 20s to early 30s, with a distinctly oval-shaped face. She possesses soft, rounded features, including full lips. Her nose is petite and slightly upturned and has almond-shaped eyes of a deep brown color with long, fluttery eyelashes. She has high cheekbones, and her skin is smooth, unblemished. She has expressive eyebrows, that are gently arched.
  # Output: Female, late 20s to early 30s, oval-shaped face, soft rounded features, full lips, petite and slightly upturned nose, almond-shaped deep brown eyes, long fluttery eyelashes, high cheekbones, smooth unblemished skin, expressive gently arched eyebrows.
+ ```
+
+ # Model Fine-tuning and Quantization
+
+ This model was fine-tuned with a specific focus on facial detail extraction and quantized to 4 bits using AutoTrain Advanced. Here is the code used for fine-tuning and quantization:
+
+ ```python
+ # Step 1: Set up the environment
+ !pip install pandas autotrain-advanced -q
+ !autotrain setup --update-torch
+
+ # Step 2: Log in to Hugging Face so the trained model can be uploaded
+ from huggingface_hub import notebook_login
+ notebook_login()
+
+ # Step 3: Load and inspect your dataset
+ !mv finetune-llama-2/train.csv train.csv
+
+ import pandas as pd
+ df = pd.read_csv("train.csv")
+ print(df.head())
+
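+ # Illustrative sketch (our addition, not from the original notebook): AutoTrain's SFT
+ # trainer reads a "text" column from train.csv by default. If your CSV keeps the prompt
+ # and the answer in separate columns (the names "prompt" and "response" below are
+ # assumptions), the training text could be assembled in the same [INST] ... [/INST]
+ # format used at inference time:
+ if {"prompt", "response"}.issubset(df.columns):
+     df["text"] = "[INST] " + df["prompt"] + " [/INST] " + df["response"]
+     df.to_csv("train.csv", index=False)
+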
+ # Step 4: Run the AutoTrain command
+ !autotrain llm --train --project-name mistral-7b-mj-finetuned \
+ --model alexsherstinsky/Mistral-7B-v0.1-sharded --data-path . \
+ --use-peft --quantization int4 --batch-size 2 --epochs 3 --trainer sft \
+ --target-modules q_proj,v_proj --push-to-hub --username replace_it --token replace_it \
+ --lr 2e-4
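+
+ # Notes on the flags above (our reading of the command, added for clarity):
+ # - --use-peft together with --quantization int4 trains a LoRA adapter on top of a
+ #   4-bit quantized base model (QLoRA-style), so only the adapter weights are updated.
+ # - --target-modules q_proj,v_proj applies the adapter to the attention query/value
+ #   projection layers only.
+ # - --push-to-hub uploads the result; replace the --username and --token placeholders
+ #   with your own Hugging Face username and access token.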
+
+ # Optionally, check the available options
+ !autotrain llm --help
+
+ # Step 5: Completed 🎉
+ # After the command above completes, your model is uploaded to the Hugging Face Hub.
+
+ # Step 6: Inference
+ !pip install -q peft accelerate bitsandbytes safetensors
+
+ import torch
+ from peft import PeftModel
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ import transformers
+
+ adapters_name = "YasiruDEX/mistral-7b-mj-finetuned-face-feature-extraction"
+ model_name = "mistralai/Mistral-7B-Instruct-v0.1"  # or your preferred base model
+
+ device = "cuda"  # the device to run generation on
+
+ # 4-bit quantization settings for loading the base model
+ bnb_config = transformers.BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_use_double_quant=True,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_compute_dtype=torch.bfloat16
+ )
+
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     quantization_config=bnb_config,
+     torch_dtype=torch.bfloat16,
+     device_map='auto'
+ )
+
+ # Step 7: Load the fine-tuned PEFT adapter on top of the base model
+ model = PeftModel.from_pretrained(model, adapters_name)
+
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ tokenizer.bos_token_id = 1
+
+ print(f"Successfully loaded the model {model_name} into memory")
+
+ text = "[INST] Extract important facial and hair details from the description and give them in a comma separated string. Suspect is a female, likely in her late 20s to early 30s, with a distinctly oval-shaped face. She possesses soft, rounded features, including full lips. Her nose is petite and slightly upturned and has almond-shaped eyes of a deep brown color with long, fluttery eyelashes. She has high cheekbones, and her skin is smooth, unblemished. She has expressive eyebrows, that are gently arched. [/INST]"
+
+ encoded = tokenizer(text, return_tensors="pt", add_special_tokens=False)
+ model_input = encoded.to(device)  # move the inputs (not the 4-bit model) to the GPU
+ generated_ids = model.generate(**model_input, max_new_tokens=200, do_sample=True)
+ decoded = tokenizer.batch_decode(generated_ids)
+ print(decoded[0])
+ ```
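+
+ The model returns the attributes as a single comma-separated string. As a small post-processing sketch (our addition, not part of the original code; the helper name `parse_attributes` is hypothetical), the generated text can be split into a Python list of attributes:
+
+ ```python
+ def parse_attributes(decoded_text: str) -> list[str]:
+     # Keep only the text generated after the instruction block.
+     answer = decoded_text.split("[/INST]")[-1]
+     # Drop special tokens that may remain in the decoded string.
+     answer = answer.replace("<s>", "").replace("</s>", "").strip()
+     # Split the comma-separated attribute string into individual entries.
+     return [attr.strip() for attr in answer.split(",") if attr.strip()]
+
+ attributes = parse_attributes(decoded[0])
+ print(attributes)
+ ```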
+
+ This additional section documents the model's training process, including its fine-tuning and 4-bit quantization steps.