Suppress the prompt from appearing in the generated response

#27
by InformaticsSolutions - opened

Example code:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = 'teknium/OpenHermes-2.5-Mistral-7B'
messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I’m quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I’m cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to(device)
# apply_chat_template with return_tensors="pt" returns the prompt token ids as a tensor
encoded = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
model_inputs = encoded.to(device)
# This tokenizer defines no pad token, so reuse the EOS token id for padding
generated_ids = model.generate(model_inputs, max_new_tokens=250, do_sample=False, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.eos_token_id)
decoded = tokenizer.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(decoded[0])

Response:

<|im_start|> user
What is your favourite condiment? 
 <|im_start|> assistant
Well, I’m quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I’m cooking up in the kitchen! 
 <|im_start|> user
Do you have mayonnaise recipes? 
 <|im_start|> assistant
Of course! Here are two simple recipes for homemade mayonnaise that you can try out. [...]

Is there a way to prevent or suppress the prompt from appearing in the response? Thank you.

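One way is to slice the prompt tokens off generated_ids before decoding, so that only the newly generated tokens are turned back into text: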
# generated_ids includes the prompt tokens; model_inputs.shape[1] is the prompt length
response_ids = generated_ids[:, model_inputs.shape[1]:]
print(tokenizer.batch_decode(response_ids, skip_special_tokens=True)[0])
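
Alternatively, if you use the transformers text-generation pipeline, it can drop the prompt for you via return_full_text=False. A minimal sketch, reusing the model and tokenizer loaded above:

from transformers import pipeline

# return_full_text=False makes the pipeline return only the completion,
# without echoing the prompt back.
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
out = pipe(prompt, max_new_tokens=250, do_sample=False, return_full_text=False)
print(out[0]["generated_text"])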
