---
library_name: transformers
tags: []
---

## Llama-3-SURPASSONE-JP-8B

![Llama-3-SURPASSONE-JP-8B-image](./visual.png)

### Model Description

**Llama-3-SURPASSONE-JP-8B** is a large language model trained by [SURPASSONE, Inc](https://surpassone.com/).
Based on [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), this model has undergone additional post-training on Japanese data to expand its instruction-following capabilities in Japanese.

This model specializes in nursing care: given a topic, it generates multiple-choice questions with options, correct answers, and corresponding explanations. It can also answer free-form questions about nursing care.

For more details, please refer to [our blog post](https://docs.google.com/document/d/1ENAEzgV3n-sFiSoV3oQBTgzjeyTfmL64zEczepTKEW0/edit?usp=sharing).

### Usage

```python
# Make sure you are logged in to Hugging Face
from huggingface_hub import login

hf_token = ""  # your Hugging Face access token
login(token=hf_token)

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Optional 4-bit quantization config; pass it as quantization_config below to enable
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "surpassone/Llama-3-SURPASSONE-JP-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=None,  # set to bnb_config to load the model in 4-bit
)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

model.eval()

# for MCQ set generation

alpaca_prompt = """以下は、タスクを説明する指示と、さらに詳しいコンテキストを提供する入力を組み合わせたものです。要求を適切に完了する応答を記述してください。

### 説明書:
{}

### 入力:
{}

### 応答:
{}"""

EOS_TOKEN = tokenizer.eos_token  # the model's EOS token (not needed during generation below)

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "次のトピックに関する複数選択問題を生成します。",  # instruction: "Generate multiple-choice questions on the following topic."
            "介護:体の仕組み",  # input: "Nursing care: how the body works"
            "",  # output - leave this blank for generation!
        )
    ],
    return_tensors="pt",
).to("cuda")

from transformers import TextStreamer

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=1028)

# for QA generation

# Define the QA prompt template
alpaca_prompt = """以下は質問です。質問に適切に答える回答を書いてください。

### 質問:
{}

### 答え:
{}"""

eos_token_id = tokenizer.eos_token_id

inputs = tokenizer(
    [alpaca_prompt.format(
        "介護福祉士はどのような責任を負うべきですか?",  # Question
        ""  # Answer - leave this blank for generation!
    )],
    return_tensors="pt"
).to("cuda")

from transformers import TextStreamer

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=1028)
```
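Because the model echoes the full prompt back along with its completion, it can be convenient to build prompts and strip the template from decoded output with small helpers. The sketch below is an illustrative convenience, not part of the model's API: the helper names (`build_mcq_prompt`, `extract_response`) are our own, and the marker string `### 応答:` is taken from the MCQ template shown above.

```python
# Illustrative helpers (hypothetical names, not part of the model's API) for
# building the Alpaca-style MCQ prompt and extracting only the generated text.

MCQ_PROMPT = """以下は、タスクを説明する指示と、さらに詳しいコンテキストを提供する入力を組み合わせたものです。要求を適切に完了する応答を記述してください。

### 説明書:
{}

### 入力:
{}

### 応答:
{}"""


def build_mcq_prompt(instruction: str, topic: str) -> str:
    """Fill the MCQ template, leaving the response slot empty for generation."""
    return MCQ_PROMPT.format(instruction, topic, "")


def extract_response(decoded: str, marker: str = "### 応答:") -> str:
    """Return only the text after the response marker, stripping any EOS token."""
    _, _, response = decoded.partition(marker)
    return response.replace("<|eot_id|>", "").strip()


prompt = build_mcq_prompt("次のトピックに関する複数選択問題を生成します。", "介護:体の仕組み")
decoded = prompt + " 問題1: ..."  # stand-in for tokenizer.decode(output[0])
print(extract_response(decoded))  # -> 問題1: ...
```

The same `extract_response` helper works for the QA template by passing `marker="### 答え:"`.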

### Developers

Listed in alphabetical order.

- [Leo Uno](https://huggingface.co/leouno12)
- [Mustain Billah](https://huggingface.co/Mustain)
- [Shugo Saito](https://huggingface.co/shugo3110)


### License

[Meta Llama 3 Community License](https://llama.meta.com/llama3/license/)

### How to Cite

```tex
@misc{surpassonellama2024,
    title={surpassone/Llama-3-SURPASSONE-JP-8B},
    author={Mustain Billah and Shugo Saito and Leo Uno},
    year={2024},
    url={https://huggingface.co/surpassone/Llama-3-SURPASSONE-JP-8B},
}
```

### Citations

```tex
@article{llama3modelcard,
    title={Llama 3 Model Card},
    author={AI@Meta},
    year={2024},
    url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```