Mustain committed
Commit 81dcbc3
1 Parent(s): fcca72b

Update README.md

Files changed (1):
  1. README.md +32 -1
README.md CHANGED
@@ -19,6 +19,11 @@ For more details, please refer to [our blog post](https://docs.google.com/docume
 ### Usage
 
 ```python
+# make sure you are logged in to Hugging Face
+hf_token = ""  # your Hugging Face token
+from huggingface_hub import login
+login(token=hf_token)
+
 import torch
 from transformers import AutoModelForCausalLM, BitsAndBytesConfig, AutoTokenizer
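The hunks above and below elide the README's model-construction step (old lines 25-40 are not shown). A minimal sketch of what that gap plausibly contains, assuming 4-bit loading via the `BitsAndBytesConfig` the imports bring in; `MODEL_ID` is a placeholder, not the repository's actual model name:

```python
# Hypothetical sketch of the loading step this diff elides (old lines 25-40).
# Assumes 4-bit quantization, since BitsAndBytesConfig is imported above;
# MODEL_ID is a placeholder, not the actual repository name.
MODEL_ID = "your-org/your-model"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",
    token=hf_token,
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, token=hf_token)
```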
@@ -41,6 +46,8 @@ if tokenizer.pad_token is None:
     tokenizer.pad_token = tokenizer.eos_token
 
 model.eval()
+# for MCQ set generation
+
 alpaca_prompt = """以下は、タスクを説明する指示と、さらに詳しいコンテキストを提供する入力を組み合わせたものです。要求を適切に完了する応答を記述してください。
 
 ### 説明書:
 
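For readers without Japanese, the MCQ template above reads: "Below is a combination of an instruction that describes a task and an input that provides further context. Write a response that appropriately completes the request.", and `### 説明書:` is "### Instruction:". The diff also skips how this template is filled and tokenized (old lines 47-66); a minimal sketch, assuming the standard three Alpaca slots (instruction, input, blank response) — the placeholder strings below are illustrative only:

```python
# Hypothetical fill of the MCQ template above (the diff elides old lines 47-66).
# Assumes the standard three Alpaca slots: instruction, input, and response.
inputs = tokenizer(
    [alpaca_prompt.format(
        "ここに指示を書いてください",  # Instruction slot (placeholder: "write the instruction here")
        "ここに文脈を書いてください",  # Input slot (placeholder: "write the context here")
        "",                            # Response slot - left blank for generation
    )],
    return_tensors="pt",
).to("cuda")
```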
@@ -67,14 +74,38 @@ from transformers import TextStreamer
 text_streamer = TextStreamer(tokenizer)
 _ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 1028)
 ```
+# for QA generation
+# Define the formatting function and the prompt template
+alpaca_prompt = """以下は質問です。質問に適切に答える回答を書いてください。
+
+### 質問:
+{}
+
+### 答え:
+{}"""
+
+eos_token_id = tokenizer.eos_token_id
+
+inputs = tokenizer(
+    [alpaca_prompt.format(
+        "介護福祉士はどのような責任を負うべきですか?",  # Question
+        ""  # Answer - leave this blank for generation!
+    )],
+    return_tensors="pt"
+).to("cuda")
+
+from transformers import TextStreamer
+text_streamer = TextStreamer(tokenizer)
+_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 1028)
 
 ### Developers
 
 Listed in alphabetical order.
 
+- [Leo Uno](https://huggingface.co/leouno12)
 - [Mustain Billah](https://huggingface.co/Mustain)
 - [Shugo Saito](https://huggingface.co/shugo3110)
-- [Leo Uno](https://huggingface.co/leouno12)
+
 
 ### License
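In the added QA block, the template reads "Below is a question. Write an answer that appropriately answers the question.", with `### 質問:` and `### 答え:` as "### Question:" and "### Answer:"; the sample question asks what responsibilities a certified care worker should bear. Note also that the block lands after the closing ``` fence, so it renders as plain text in the README, and that `eos_token_id` is assigned but never used. A minimal non-streaming variant that actually passes it to `generate` and decodes only the new tokens:

```python
# Non-streaming variant of the QA call above. Unlike the committed snippet,
# this passes the (otherwise unused) eos_token_id and decodes only the
# newly generated tokens instead of streaming them.
output_ids = model.generate(
    **inputs,
    max_new_tokens=1028,
    eos_token_id=eos_token_id,
)
answer = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(answer)
```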