---
library_name: transformers
license: apache-2.0
datasets:
- skumar9/orpo-mmlu
tags:
- medical
---

# Model Card for meditron-7b-guanaco-chat


This is a Llama-2 7B family chat model, fine-tuned from the base [`epfl-llm/meditron-7b`](https://huggingface.co/epfl-llm/meditron-7b) on the [Open Assistant (Guanaco) dataset](https://huggingface.co/datasets/mlabonne/guanaco-llama2) using supervised fine-tuning (SFT) with [QLoRA](https://arxiv.org/abs/2305.14314).<br>
All linear layers were made trainable with a LoRA rank of 16.<br>
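
The "all linear layers at rank 16" setup can be expressed as a LoRA configuration along the following lines. This is a sketch: only the rank is stated in the card; the alpha, dropout, and module names are assumptions based on the standard Llama architecture.

```python
# Hypothetical QLoRA configuration mirroring the card's description.
# Only r=16 comes from the card; the other values are assumed defaults.
lora_config = {
    "r": 16,                      # LoRA rank stated in the card
    "lora_alpha": 32,             # assumed scaling factor
    "lora_dropout": 0.05,         # assumed dropout
    "target_modules": [           # all linear layers in a Llama block
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    "task_type": "CAUSAL_LM",
}

# Trainable parameters added per adapted linear layer at rank r:
# a (d_out x r) matrix A plus an (r x d_in) matrix B.
def lora_params(d_in, d_out, r=16):
    return d_out * r + r * d_in

print(lora_params(4096, 4096))  # one 4096x4096 projection -> 131072 extra params
```

At rank 16 each adapted 4096x4096 projection adds only ~131k trainable parameters, versus ~16.8M in the frozen full matrix, which is why QLoRA keeps fine-tuning cheap.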

# Prompt template: Llama

```
'<s> [INST] <<SYS>>
You are a helpful, respectful and honest medical assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>> {question} [/INST] {model answer} </s>'
```
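
In code, the template reduces to simple string interpolation. A minimal sketch (`build_prompt` is a hypothetical helper, not part of the model repository):

```python
# Fill the Llama-2 chat template above with a system prompt and a question.
# `build_prompt` is an illustrative helper, not an API from this repo.
def build_prompt(question, system_prompt):
    return f"<s> [INST] <<SYS>>\n{system_prompt}\n<</SYS>> {question} [/INST]"

prompt = build_prompt(
    "What causes oral leukoplakia?",
    "You are a helpful, respectful and honest medical assistant.",
)
print(prompt)
```

The model's answer is then generated as a continuation of this string, terminated by `</s>`.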

# Usage:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name = 'jiviadmin/meditron-7b-guanaco-chat'

# Load the model
base_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    low_cpu_mem_usage=True,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map={"": 0},
)

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, add_eos_token=True)
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
tokenizer.pad_token_id = 18610  # note: overrides the id of the '[PAD]' token added above
tokenizer.padding_side = "right"

default_system_prompt = (
    "You are a helpful, respectful and honest medical assistant. Always answer as helpfully as "
    "possible, while being safe. Your answers should not include any harmful, unethical, racist, "
    "sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially "
    "unbiased and positive in nature.\n"
    "If a question does not make any sense, or is not factually coherent, explain why instead of "
    "answering something not correct. If you don't know the answer to a question, please don't "
    "share false information. Please consider the context below if applicable:\n"
    "Context: NA"
)

def format_prompt(question):
    return f'<s> [INST] <<SYS>> {default_system_prompt} <</SYS>> {question} [/INST]'

question = 'My father has a big white colour patch inside of his right cheek. Please suggest a reason.'

# Initialize the Hugging Face pipeline
pipe = pipeline(
    task="text-generation",
    model=base_model,
    tokenizer=tokenizer,
    max_length=512,
    repetition_penalty=1.1,
    return_full_text=False,
)
result = pipe(format_prompt(question))
answer = result[0]['generated_text']
print(answer)
```
