Restricting prior internal knowledge for RAG

#110
by jasonisaac - opened

Hello Everyone,

I am building a RAG application with the Mistral-7B-Instruct-v0.2 model. It works well for questions related to the knowledge content, but I want to restrict the LLM from answering any out-of-scope questions. This is the prompt I am currently using:

You are an assistant to users who have queries about {topic}.
Only USE the following pieces of context under Contexts to answer. Do not use any external knowledge or information. If the answer cannot be determined from the context, respond with "I don't have enough information to answer that".

Contexts:

{contexts}
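For reference, this is roughly how the template above gets filled before being sent to the model. The `build_prompt` helper and its argument names are illustrative, not part of any library:

```python
def build_prompt(topic: str, contexts: list[str], question: str) -> str:
    """Fill the restrictive RAG template with retrieved context chunks."""
    context_block = "\n\n".join(contexts)
    return (
        f"You are an assistant to users who have queries about {topic}.\n"
        "Only USE the following pieces of context under Contexts to answer. "
        "Do not use any external knowledge or information. "
        "If the answer cannot be determined from the context, respond with "
        '"I don\'t have enough information to answer that".\n\n'
        f"Contexts:\n\n{context_block}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt(
    "solar panels",
    ["Panels convert sunlight to electricity.", "Inverters convert DC to AC."],
    "How does a panel produce power?",
)
```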

What is an effective way to stop the LLM from answering questions that are out of scope?
Can we restrict it with a prompt like the one above, or is there a better way?

Thanks in Advance

Working on something similar. No matter what the prompt is, it provides answers from outside the context. Have you found a solution?

This is much needed before taking anything to production.
