Knowledge
Agents use knowledge to supplement their training data with domain expertise.
Knowledge is stored in a vector database and provides agents with business context at query time, helping them respond in a context-aware manner.
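A rough sketch of what this looks like in code, assuming the `Agent` and `AgentKnowledge` classes listed in the attributes table below. The import paths, the `PgVector` backend, and the connection string are assumptions; see the cookbook under Developer Resources for exact usage:

```python
# Sketch only: import paths and connection details are assumptions.
from agno.agent import Agent                     # assumed import path
from agno.knowledge.agent import AgentKnowledge  # assumed import path
from agno.vectordb.pgvector import PgVector      # assumed import path

# A knowledge base is an AgentKnowledge (or subclass) backed by a vector database.
knowledge_base = AgentKnowledge(
    vector_db=PgVector(
        table_name="knowledge",
        db_url="postgresql+psycopg://ai:ai@localhost:5532/ai",  # placeholder credentials
    ),
)

# Passing knowledge to the agent enables search_knowledge (True by default),
# which gives the model a search_knowledge_base() tool to call at query time.
agent = Agent(knowledge=knowledge_base)
```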
You can give your agent access to your knowledge base in the following ways:
- You can set `search_knowledge=True` to provide a `search_knowledge_base()` tool to your agent. This is added automatically if you provide a knowledge base.
- You can set `add_references=True` to automatically add references from the knowledge base to the prompt. Optionally, pass your own `retriever` callable; a sketch of the expected signature follows this list.
- You can set `update_knowledge=True` to provide an `add_to_knowledge()` tool to your agent, allowing it to update the knowledge base.
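The attributes table below types the retriever as `Callable[..., Optional[list[dict]]]`, so a custom retriever is, roughly, a function that takes the user's query and returns a list of reference dicts (or `None`). A sketch, in which the parameter names and dict keys are assumptions:

```python
from typing import Optional

def retriever(query: str, num_documents: Optional[int] = None, **kwargs) -> Optional[list[dict]]:
    """Return reference documents to add to the user message, or None."""
    # Query your own store here; the dict keys below are illustrative only.
    results = [{"content": f"Results for: {query}", "meta_data": {"source": "custom"}}]
    return results[:num_documents]
```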
Vector Databases
While any type of storage can act as a knowledge base, vector databases offer the best solution for retrieving relevant results from dense information quickly. Here’s how vector databases are used with Agents:
1. Chunk the information: break down the knowledge into smaller chunks so that a search query returns only relevant results.
2. Load the knowledge base: convert the chunks into embedding vectors and store them in a vector database.
3. Search the knowledge base: when the user sends a message, convert the input message into an embedding and "search" for its nearest neighbors in the vector database.
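A minimal, framework-neutral sketch of this flow. The `embed` function below is a stand-in for a real embedding model, and the in-memory list stands in for a vector database such as PgVector:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: a real setup would call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vector = rng.normal(size=256)
    return vector / np.linalg.norm(vector)

# 1. Chunk the information
document = "..."  # your knowledge, e.g. text extracted from a PDF
chunks = [document[i : i + 500] for i in range(0, len(document), 500)]

# 2. Load the knowledge base: embed each chunk and store it
index = [(chunk, embed(chunk)) for chunk in chunks]

# 3. Search the knowledge base: embed the query, return the nearest neighbors
def search(query: str, k: int = 3) -> list[str]:
    q = embed(query)
    ranked = sorted(index, key=lambda item: float(np.dot(item[1], q)), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]
```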
Example: RAG Agent with a PDF Knowledge Base
Let’s build a RAG Agent that answers questions from a PDF.
Step 1: Run PgVector
Let's use PgVector as our vector database, since it can also provide storage for our Agents.
Install Docker Desktop and run PgVector on port 5532.
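For example, using the public pgvector/pgvector image (the `ai` database name, user, and password are placeholders; they must match the connection string you use in the agent code below):

```bash
docker run -d \
  --name pgvector \
  -e POSTGRES_DB=ai \
  -e POSTGRES_USER=ai \
  -e POSTGRES_PASSWORD=ai \
  -p 5532:5432 \
  pgvector/pgvector:pg16
```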
Step 2: Traditional RAG
Retrieval Augmented Generation (RAG) means “stuffing the prompt with relevant information” to improve the model’s response. This is a two-step process:
- Retrieve relevant information from the knowledge base.
- Augment the prompt to provide context to the model.
Let’s build a traditional RAG Agent that answers questions from a PDF of recipes.
Install libraries
Install the required libraries using pip.
Create a Traditional RAG Agent
Create a file `traditional_rag.py`.
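A sketch of what this file might contain. The import paths, class names (`PDFUrlKnowledgeBase`, `PgVector`, `OpenAIChat`), and the PDF URL are assumptions; the `knowledge`, `add_references`, and `search_knowledge` parameters come from the attributes table at the end of this page. Check the cookbook for the exact code:

```python
# Sketch only: import paths, class names, and the PDF URL are assumptions.
from agno.agent import Agent                             # assumed import path
from agno.models.openai import OpenAIChat                # assumed import path
from agno.knowledge.pdf_url import PDFUrlKnowledgeBase   # assumed import path
from agno.vectordb.pgvector import PgVector              # assumed import path

# Connection string for the PgVector container from Step 1 (placeholder credentials).
db_url = "postgresql+psycopg://ai:ai@localhost:5532/ai"

# Point the knowledge base at a PDF of recipes (hypothetical URL).
knowledge_base = PDFUrlKnowledgeBase(
    urls=["https://example.com/recipes.pdf"],
    vector_db=PgVector(table_name="recipes", db_url=db_url),
)
# Chunk, embed, and store the PDF; comment this out after the first run.
knowledge_base.load(recreate=False)

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    knowledge=knowledge_base,
    # Traditional RAG: always add references from the knowledge base to the prompt ...
    add_references=True,
    # ... and turn off the on-demand search tool that knowledge enables by default.
    search_knowledge=False,
)

agent.print_response("Suggest a quick recipe from the knowledge base.")
```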
Run the agent
Run the agent (it takes a few seconds to load the knowledge base).
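For example, from the environment where the libraries are installed:

```bash
python traditional_rag.py
```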
Step 3: Agentic RAG
With traditional RAG above, `add_references=True` always adds information from the knowledge base to the prompt, regardless of whether it is relevant to the question or helpful.
With Agentic RAG, we let the Agent decide if it needs to access the knowledge base and what search parameters it needs to query the knowledge base.
Set `search_knowledge=True` and `read_chat_history=True`, giving the Agent tools to search its knowledge and chat history on demand.
Create an Agentic RAG Agent
Create a file `agentic_rag.py`.
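A sketch with the same assumed imports and hypothetical PDF URL as the traditional RAG example; only the flags passed to the Agent change:

```python
# Sketch only: import paths, class names, and the PDF URL are assumptions.
from agno.agent import Agent                             # assumed import path
from agno.models.openai import OpenAIChat                # assumed import path
from agno.knowledge.pdf_url import PDFUrlKnowledgeBase   # assumed import path
from agno.vectordb.pgvector import PgVector              # assumed import path

db_url = "postgresql+psycopg://ai:ai@localhost:5532/ai"  # placeholder credentials

knowledge_base = PDFUrlKnowledgeBase(
    urls=["https://example.com/recipes.pdf"],            # hypothetical URL
    vector_db=PgVector(table_name="recipes", db_url=db_url),
)
knowledge_base.load(recreate=False)  # comment out after the first run

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    knowledge=knowledge_base,
    # Agentic RAG: the model gets a search_knowledge_base() tool and decides
    # when and how to query the knowledge base.
    search_knowledge=True,
    # Let the agent read the chat history on demand as well.
    read_chat_history=True,
)

agent.print_response("Suggest a quick recipe from the knowledge base.")
agent.print_response("What was my last question?")
```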
Run the agent
Run the agent:
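```bash
python agentic_rag.py
```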
Notice how it searches the knowledge base and chat history when needed.
Attributes
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `knowledge` | `AgentKnowledge` | `None` | Provides the knowledge base used by the agent. |
| `search_knowledge` | `bool` | `True` | Adds a tool that allows the Model to search the knowledge base (aka Agentic RAG). Enabled by default when `knowledge` is provided. |
| `add_references` | `bool` | `False` | Enable RAG by adding references from `AgentKnowledge` to the user prompt. |
| `retriever` | `Callable[..., Optional[list[dict]]]` | `None` | Function to get context to add to the user message. This function is called when `add_references` is `True`. |
| `context_format` | `Literal['json', 'yaml']` | `'json'` | Specifies the format for RAG, either "json" or "yaml". |
| `add_context_instructions` | `bool` | `False` | If `True`, add instructions for using the context to the system prompt (if `knowledge` is also provided). For example: add an instruction to prefer information from the knowledge base over its training data. |
Developer Resources
- View Cookbook