Concerning the model's output format

by MMuneebUllah

Hi,
Thanks for sharing the model. By default, `outputs[0]["generated_text"]` contains the original prompt (in the Mistral format) followed by the generated response. Is there a way to make the pipeline return only the response, without repeating the prompt (like ChatGPT does)?
Much appreciated!

Pass `return_full_text=False` to the pipeline and it will return only the generated response, without the prompt.
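For example, a minimal sketch assuming a Mistral-style instruct checkpoint (the model ID below is a placeholder; substitute whichever model you are actually using):

```python
from transformers import pipeline

# Assumed model ID for illustration only.
pipe = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.1")

# Prompt in the Mistral instruct format.
prompt = "[INST] What is the capital of France? [/INST]"

# return_full_text=False strips the prompt from the output,
# so generated_text holds only the newly generated tokens.
outputs = pipe(prompt, max_new_tokens=64, return_full_text=False)
print(outputs[0]["generated_text"])  # response only, prompt not echoed back
```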
