doberst committed on
Commit e86a6a9
1 Parent(s): e3f7f60

Update README.md

Files changed (1): README.md +24 -0
@@ -81,6 +81,30 @@ To get the best results, package "my_prompt" as follows:
  my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
 
 
+
+ If you are using a HuggingFace generation script:
+
+ # prepare prompt packaging used in fine-tuning process
+ new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"
+
+ inputs = tokenizer(new_prompt, return_tensors="pt")
+ start_of_output = len(inputs.input_ids[0])
+
+ # temperature: set at 0.3 for consistency of output
+ # max_new_tokens: set at 100 - may prematurely stop a few of the summaries
+
+ outputs = model.generate(
+     inputs.input_ids.to(device),
+     eos_token_id=tokenizer.eos_token_id,
+     pad_token_id=tokenizer.eos_token_id,
+     do_sample=True,
+     temperature=0.3,
+     max_new_tokens=100,
+ )
+
+ output_only = tokenizer.decode(outputs[0][start_of_output:], skip_special_tokens=True)
+
+
  ## Citation [optional]
 
  This BLING model is built on top of a Cerebras base GPT trained model - for more information about the Cerebras GPT models, please see the following paper:
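
The prompt packaging added in this diff is plain string concatenation. As a minimal standalone sketch (the helper name `package_prompt` is hypothetical, not part of the model card):

```python
def package_prompt(context: str, query: str) -> str:
    # Mirror the <human>/<bot> wrapper used in the fine-tuning process:
    # "<human>: " + context + "\n" + query + "\n" + "<bot>:"
    return "<human>: " + context + "\n" + query + "\n" + "<bot>:"

# Example usage with a short passage and question
prompt = package_prompt("The meeting is at 3pm.", "When is the meeting?")
```

The string returned here is what the diff's snippet feeds to the tokenizer as `new_prompt`; keeping the exact `<human>:` / `<bot>:` tokens matters because the model was fine-tuned with that wrapper.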