How to get a different response from the model using the same input

#59
by mans-0987 - opened

I am trying to get several different outputs from the system for the same input, but I always get the same output.

It seems that if I initialize the model and tokenizer and then run the pipeline three times with the same input, it always generates the same output. How can I change the model seed so it generates a different output for the same input?

Greedy decoding is used by default; you can change the decoding parameters.

Can you please elaborate with some sample code?

Change the parameters of model.generate: set num_beams greater than 1, and set do_sample=True.
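
A minimal sketch of what this might look like with the text-generation pipeline (the model name, prompt, and generation settings below are placeholders, not from the thread):

```python
from transformers import pipeline, set_seed

# Placeholder checkpoint; substitute the model you are actually using.
generator = pipeline("text-generation", model="gpt2")

# do_sample=True switches from greedy decoding to sampling;
# num_beams > 1 additionally turns this into beam-sample decoding.
for seed in (0, 1, 2):
    set_seed(seed)  # a different seed gives a different sampled output
    out = generator(
        "Once upon a time",
        do_sample=True,
        num_beams=2,
        max_new_tokens=50,
    )
    print(out[0]["generated_text"])
```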


@mans-0987 did this work for you?

Hi @mans-0987, to generate different outputs for the same input, you need to change the random seed that the model uses during decoding. In most Hugging Face models, the randomness of the output is controlled through the seed, so set do_sample=True and adjust settings like temperature or top_p to get diverse outputs. Kindly try this and let us know if you have any concerns. Thank you.
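
For example, a rough sketch calling model.generate directly (the model name and the temperature/top_p values are illustrative assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

# Placeholder checkpoint; replace with the model you are using.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("The meaning of life is", return_tensors="pt")

# Sampling with temperature/top_p; changing the seed changes the output.
for seed in (0, 1, 2):
    set_seed(seed)
    output_ids = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.9,
        top_p=0.95,
        max_new_tokens=40,
    )
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```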
