Can several different prompts be handled together?

#77
by WENJINLIU

chat1=[{'role':'user','content':'write a story'}]
chat2=[{'role':'user','content':'write another story'}]

Can I combine them into a list of chats so the model can handle them together?

Google org

This should be possible - apply the chat template from the tokenizer to each of the chats separately, then send the tokenized chats as a batch to the model
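In code, that looks something like the sketch below (a minimal example, assuming google/gemma-7b-it and enough memory to run both prompts as a single batch; add_special_tokens=False avoids a duplicate BOS token, since Gemma's chat template already inserts one):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Decoder-only models are padded on the left so generation
# continues from real prompt tokens rather than from padding.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b-it", padding_side="left")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-7b-it",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires accelerate; alternatively call .to("cuda")
)

chat1 = [{"role": "user", "content": "write a story"}]
chat2 = [{"role": "user", "content": "write another story"}]

# 1. Apply the chat template to each chat separately.
texts = [
    tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
    for chat in (chat1, chat2)
]

# 2. Tokenize both prompts together; padding=True brings the shorter
#    one up to the same length and builds the attention mask.
inputs = tokenizer(texts, return_tensors="pt", padding=True, add_special_tokens=False).to(model.device)

# 3. Generate for the whole batch at once.
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.batch_decode(out, skip_special_tokens=True))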

Thank you! I am a new learner, though. Could you do me a favor and explain this in more detail? Thank you!

input1 = tokenizer.encode(prompt1, add_special_tokens=False, return_tensors="pt")

input2 = tokenizer.encode(prompt2, add_special_tokens=False, return_tensors="pt")

Then, since input1 and input2 are not the same length, should I pad the shorter one with zeros so that they share the same length, and then concatenate them and send them to the model to generate answers?
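(For reference, batches are normally padded with tokenizer.pad_token_id rather than 0, on the left for decoder-only models like Gemma, and the tokenizer can do the padding itself. A minimal sketch, assuming input1 and input2 from the encode calls above:)

# tokenizer.pad handles the length mismatch: it pads the shorter
# sequence with tokenizer.pad_token_id (not 0) and builds the
# matching attention_mask so the model ignores the padded positions.
# If your tokenizer has no pad token: tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"  # decoder-only models pad on the left
batch = tokenizer.pad(
    {"input_ids": [input1[0].tolist(), input2[0].tolist()]},
    padding=True,
    return_tensors="pt",
)
print(batch["input_ids"].shape)   # (2, max_len)
print(batch["attention_mask"])    # 0 marks padded positions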

Google org

@WENJINLIU Or you can try processing multiple prompts using a pipeline. You can directly pass the list of prompts to generate responses for each:

from transformers import pipeline

# The pipeline applies the chat template to each conversation for you.
pipe = pipeline("text-generation", model="google/gemma-7b-it", max_length=256)
prompts = [
    [{"role": "user", "content": "write a story"}],
    [{"role": "user", "content": "write another story"}],
]
outputs = pipe(prompts)  # one result per prompt
print(outputs)
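In recent transformers versions, each element of outputs is a list with one dict per returned sequence; for chat-style inputs like these, its "generated_text" field holds the conversation including the model's new assistant message. If you only want the newly generated part, you can pass return_full_text=False to the pipeline.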
