LLM Prompt

With the search results at hand, it is now possible to generate a prompt to send to the LLM. The prompt must include the original question sent by the user, the relevant passages obtained during the retrieval phase, and instructions telling the LLM to base its answer on the included passages.

To render the prompt, the application uses Flask's render_template() function:

    qa_prompt = render_template('rag_prompt.txt', question=question, docs=docs)

The template file referenced in this call is api/templates/rag_prompt.txt:

    Use the following passages to answer the user's question.
    Each passage has a NAME which is the title of the document.
    When answering, give the source name of the passages you are answering from at the end.
    Put them in a comma separated list, prefixed with SOURCES:.

    Example:

    Question: What is the meaning of life?
    Response:
    The meaning of life is 42.

    SOURCES: Hitchhiker's Guide to the Galaxy

    If you don't know the answer, just say that you don't know, don't try to make up an answer.

    ----

    {% for doc in docs -%}
    ---
    NAME: {{ doc.metadata.name }}
    PASSAGE:
    {{ doc.page_content }}
    ---

    {% endfor -%}
    ----
    Question: {{ question }}
    Response:
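To see what the rendered prompt looks like, the template's for-loop can be reproduced in plain Python. The sketch below uses a hypothetical sample document shaped like the objects the template expects (with `metadata.name` and `page_content` attributes); it is an illustration of the loop's output, not the application's actual code:

    from types import SimpleNamespace

    # Hypothetical retrieved document, mimicking the attributes the
    # template accesses: doc.metadata.name and doc.page_content
    docs = [
        SimpleNamespace(
            metadata=SimpleNamespace(name="Hitchhiker's Guide to the Galaxy"),
            page_content="The meaning of life is 42.",
        ),
    ]
    question = "What is the meaning of life?"

    # Equivalent of the template's for-loop over the retrieved passages
    passages = ""
    for doc in docs:
        passages += (
            f"---\nNAME: {doc.metadata.name}\n"
            f"PASSAGE:\n{doc.page_content}\n---\n\n"
        )

    # Append the closing separator and the user's question
    qa_prompt = f"{passages}----\nQuestion: {question}\nResponse:\n"
    print(qa_prompt)

Each retrieved document contributes one NAME/PASSAGE block, and the user's question always comes last, immediately before the `Response:` cue that the LLM completes.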

You can make changes to this template if you'd like to see their effect on the quality of the chatbot's responses, but always preserve the for-loop that renders the retrieved passages.
