shahidul034 arbitropy committed on
Commit
02d9cc5
1 Parent(s): 8004ade

Create README.md (#1)


- Create README.md (a50f90d63a57f3a48dff7754891d79fb2172723b)


Co-authored-by: Tanim Jalal <arbitropy@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +49 -0
README.md ADDED
@@ -0,0 +1,49 @@
---
license: apache-2.0
pipeline_tag: text-generation
---
KUETLLM is a [zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) finetune, trained on a dataset of prompts and answers about Khulna University of Engineering and Technology (KUET).
The base model was loaded in 8-bit quantization using [bitsandbytes](https://github.com/TimDettmers/bitsandbytes), and [LoRA](https://huggingface.co/docs/diffusers/main/en/training/lora) was used to finetune an adapter, which was later merged into the unquantized base model.

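In code, that load-and-merge workflow looks roughly like the sketch below (a minimal sketch, assuming the standard transformers/peft APIs; the adapter path and output directory are placeholders, not actual artifact names):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_id = "HuggingFaceH4/zephyr-7b-beta"

# Base model loaded in 8-bit via bitsandbytes for adapter training.
model_8bit = AutoModelForCausalLM.from_pretrained(
    base_id, load_in_8bit=True, device_map="auto"
)

# After finetuning, the LoRA adapter is merged into the unquantized base model.
base_fp16 = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base_fp16, "path/to/kuetllm-adapter")  # placeholder path
merged = merged.merge_and_unload()
merged.save_pretrained("kuetllm-merged")  # placeholder output directory
```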
Below are the training configurations for the finetuning process:
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    per_device_train_batch_size=12,
    gradient_accumulation_steps=1,
    optim="paged_adamw_8bit",
    learning_rate=5e-06,
    fp16=True,
    logging_steps=10,
    num_train_epochs=1,
    output_dir="zephyr_lora_output",
    remove_unused_columns=False,
)
```
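How these two configs plug into a training loop is not shown in this card; a minimal sketch, assuming a tokenized `train_dataset` of KUET prompt/answer pairs, a loaded `tokenizer`, and the `model_8bit` from the snippet above, might look like:

```python
from peft import get_peft_model, prepare_model_for_kbit_training
from transformers import DataCollatorForLanguageModeling, Trainer

# Make the 8-bit model trainable and attach the LoRA adapter from `lora_config`.
model_8bit = prepare_model_for_kbit_training(model_8bit)
peft_model = get_peft_model(model_8bit, lora_config)

trainer = Trainer(
    model=peft_model,
    args=training_args,
    train_dataset=train_dataset,  # assumed: the KUET Q&A dataset, already tokenized
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```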

## Inference
```python
from transformers import GenerationConfig

# `tokenizer` and `model` are assumed to be the KUETLLM tokenizer and model,
# loaded beforehand with AutoTokenizer / AutoModelForCausalLM.

def process_data_sample(example):
    # Wrap the user query in the zephyr-style chat template used during finetuning.
    processed_example = (
        "<|system|>\nYou are a KUET authority managed chatbot, "
        "help users by answering their queries about KUET.\n"
        "<|user|>\n" + example + "\n<|assistant|>\n"
    )
    return processed_example

inp_str = process_data_sample("Tell me about KUET.")
inputs = tokenizer(inp_str, return_tensors="pt")

generation_config = GenerationConfig(
    do_sample=True,
    top_k=1,
    temperature=0.1,
    max_new_tokens=256,
    pad_token_id=tokenizer.eos_token_id,
)

outputs = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
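Note that with `top_k=1` the sampler can only ever pick the single most likely token, so despite `do_sample=True` the generation above is effectively greedy and near-deterministic.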