Changing the prompt for the model

#8
by MurtazaNasir - opened

Changing the prompt doesn't seem to have any effect. If I ask it to provide a concise single-sentence caption, it still gives me the full detailed caption. The VLM_PROMPT line appears to do nothing. Is there a way to change the output parameters for the model?

Probably not. The base Llama 3.1 model used here is the non-instruct version, so it isn't trained to follow instructions. The original author would need to switch to the instruct version for the prompt to be followed, or at least train another adapter that supports both a "short" and a "long" prompt variant via trigger words.
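For what it's worth, even with the instruct model, a raw prompt string like VLM_PROMPT would likely be ignored unless it is wrapped in the Llama 3.1 chat template (special header and end-of-turn tokens). A minimal sketch of that wrapping, assuming the publicly documented Llama 3/3.1 template; the function name and prompt text are just for illustration:

```python
def build_llama31_prompt(user_prompt: str,
                         system_prompt: str = "You are a helpful assistant.") -> str:
    """Wrap a prompt in the Llama 3/3.1 instruct chat template.

    The non-instruct pipeline feeds the model plain text, which the
    instruct model does not recognize as an instruction turn, so the
    prompt may still be ignored without this formatting.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_prompt}<|eot_id|>"
        # Open the assistant turn so generation continues as the reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# Hypothetical prompt, mirroring the VLM_PROMPT variable in question.
VLM_PROMPT = "Provide a concise single-sentence caption for this image."
print(build_llama31_prompt(VLM_PROMPT))
```

In practice you would not build this string by hand: the tokenizer for the instruct checkpoint ships with the template, and `tokenizer.apply_chat_template(...)` in transformers produces the same formatting.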

I don’t know which one the original author will use, but it is currently in development

I switched to the instruct model and it's still not following the prompt.
