Fan21 committed on
Commit b9dda93
1 Parent(s): 292de2b

Update README.md

Files changed (1):
  1. README.md +19 -1
README.md CHANGED
@@ -9,4 +9,22 @@ pipeline_tag: question-answering
  <!-- Provide a quick summary of what the model is/does. -->

  This model is fine-tuned from LLaMA on 8 Nvidia GTX 1080 Ti GPUs and enhanced with conversation safety policies (e.g., against threats, profanity, and identity attacks) using 3,000,000 math discussion posts by students and facilitators on Algebra Nation (https://www.mathnation.com/). SafeMathBot consists of 48 layers and over 1.5 billion parameters, consuming up to 6 gigabytes of disk space. Researchers can experiment with and fine-tune the model to construct math conversational AI that effectively avoids unsafe response generation. It was trained so that researchers can control the safety of generated responses with the tags [SAFE] and [UNSAFE].
### How to use it with text in Hugging Face Transformers
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

math_bot_tokenizer = AutoTokenizer.from_pretrained('uf-aice-lab/SafeMathBot')
safe_math_bot = AutoModelForCausalLM.from_pretrained('uf-aice-lab/SafeMathBot')

# A list of special tokens the model was trained with
special_tokens_dict = {
    'additional_special_tokens': [
        '[SAFE]', '[UNSAFE]', '[OK]', '[SELF_M]', '[SELF_F]', '[SELF_N]',
        '[PARTNER_M]', '[PARTNER_F]', '[PARTNER_N]',
        '[ABOUT_M]', '[ABOUT_F]', '[ABOUT_N]', '<speaker1>', '<speaker2>',
    ],
    'bos_token': '<bos>',
    'eos_token': '<eos>',
}
# Register the special tokens so the tokenizer keeps them as single units
math_bot_tokenizer.add_special_tokens(special_tokens_dict)

text = "Replace me by any text you'd like."
encoded_input = math_bot_tokenizer(text, return_tensors='pt')
output = safe_math_bot(**encoded_input)  # forward pass; output.logits holds next-token scores
```
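The model card says responses are controlled by prepending tags such as `[SAFE]`, but it does not document the exact prompt layout. As an illustration only, here is one way such a prompt could be assembled from the special tokens listed above; the `build_prompt` helper and the ordering `<bos>[SAFE]<speaker1>…<eos>` are assumptions, not documented behavior:

```python
# Hypothetical helper: assemble a dialogue prompt using the model's special
# tokens. The token names come from the model card; the layout is an
# assumption for illustration, not the documented training format.
def build_prompt(history, control_tag='[SAFE]', bos='<bos>', eos='<eos>'):
    """Join alternating speaker turns and prepend a safety control tag."""
    speakers = ['<speaker1>', '<speaker2>']
    turns = ''.join(
        f'{speakers[i % 2]}{utterance}' for i, utterance in enumerate(history)
    )
    return f'{bos}{control_tag}{turns}{eos}'

prompt = build_prompt(['How do I factor x^2 - 9?'])
# -> '<bos>[SAFE]<speaker1>How do I factor x^2 - 9?<eos>'
```

A string built this way would then be passed through `math_bot_tokenizer(..., return_tensors='pt')` exactly as in the snippet above.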