At its core, Agno’s Knowledge system is Retrieval Augmented Generation (RAG) made simple. Instead of cramming everything into a prompt, you store information in a searchable knowledge base and let agents pull exactly what they need, when they need it.

The Knowledge Pipeline: Three Simple Steps

1. Store: Break Down and Index Information

Your documents, files, and data are processed by specialized readers, broken into chunks using configurable strategies, and stored in a vector database with their meanings captured as embeddings.

Example: A 50-page employee handbook is processed by Agno’s PDFReader, chunked with the SemanticChunking strategy, and becomes 200 searchable chunks covering topics like “vacation policy,” “remote work guidelines,” and “expense procedures.”

2. Search: Find Relevant Information

When a user asks a question, the agent automatically searches the knowledge base using Agno’s search methods to find the most relevant information chunks.

Example: User asks “How many vacation days do I get?” → the agent calls knowledge.search() and finds chunks about vacation policies, PTO accrual, and holiday schedules.

3. Generate: Create Contextual Responses

The agent combines the retrieved information with the user’s question to generate an accurate, contextual response, with sources tracked through Agno’s content management system.

Example: “Based on your employee handbook, full-time employees receive 15 vacation days per year, accrued monthly at 1.25 days per month…”
Think of embeddings as a way to capture meaning in numbers. When you ask “What’s our refund policy?”, the system doesn’t just match the word “refund”—it understands you’re asking about returns, money back, and customer satisfaction. That’s because text gets converted into vectors (lists of numbers) where similar meanings cluster together. “Refund policy” and “return procedures” end up close in vector space, even though they don’t share exact words. This is what enables semantic search beyond simple keyword matching.
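To see this in numbers, here’s a minimal standalone sketch that calls the OpenAI embeddings API directly (not through Agno) and compares vectors with cosine similarity; it assumes an OPENAI_API_KEY is set in the environment:
# Standalone illustration of semantic similarity (not Agno-specific):
# embed three phrases and compare their vectors.
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

response = client.embeddings.create(
    model="text-embedding-3-small",
    input=["refund policy", "return procedures", "office parking rules"],
)
refund, returns, parking = (item.embedding for item in response.data)

# "refund policy" and "return procedures" score much closer to each other
# than either does to "office parking rules", despite no shared keywords.
print(cosine_similarity(refund, returns))
print(cosine_similarity(refund, parking))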

Setting Up Knowledge in Code

Here’s how you connect the pieces to build a knowledge-powered agent:
from agno.knowledge.knowledge import Knowledge
from agno.vectordb.pgvector import PgVector
from agno.knowledge.embedder.openai import OpenAIEmbedder
from agno.knowledge.chunking.semantic import SemanticChunking
from agno.knowledge.reader.pdf_reader import PDFReader
from agno.agent import Agent

# 1. Configure vector database with embedder
vector_db = PgVector(
    table_name="company_knowledge",
    db_url="postgresql+psycopg://user:pass@localhost:5432/db",
    embedder=OpenAIEmbedder(id="text-embedding-3-small")  # Optional: defaults to OpenAIEmbedder
)

# 2. Create knowledge base
knowledge = Knowledge(
    name="Company Documentation",
    vector_db=vector_db,
    max_results=10
)

# 3. Add content with chunking strategy
knowledge.add_content(
    path="company_docs/employee_handbook.pdf",
    reader=PDFReader(
        chunking_strategy=SemanticChunking(  # Optional: defaults to FixedSizeChunking
            chunk_size=1000,
            similarity_threshold=0.5
        )
    ),
    metadata={"type": "policy", "department": "hr"}
)

# 4. Create agent with knowledge search enabled
agent = Agent(
    knowledge=knowledge,
    search_knowledge=True,  # Required for automatic search
    knowledge_filters={"type": "policy"}  # Optional filtering
)
Smart Defaults: Agno provides sensible defaults to get you started quickly:
  • Embedder: If no embedder is specified, Agno automatically uses OpenAIEmbedder with default settings
  • Chunking: If no chunking strategy is provided to readers, Agno defaults to FixedSizeChunking(chunk_size=5000)
  • Search Type: Vector databases default to SearchType.vector for semantic search
This means you can start with minimal configuration and customize as needed!
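For instance, a bare-bones setup that leans entirely on these defaults might look like the sketch below (it reuses the connection string from the example above):
from agno.agent import Agent
from agno.knowledge.knowledge import Knowledge
from agno.vectordb.pgvector import PgVector

# No embedder, chunking strategy, or search type specified: Agno falls back to
# OpenAIEmbedder, FixedSizeChunking(chunk_size=5000), and SearchType.vector.
knowledge = Knowledge(
    vector_db=PgVector(
        table_name="company_knowledge",
        db_url="postgresql+psycopg://user:pass@localhost:5432/db",
    ),
)
knowledge.add_content(path="company_docs/employee_handbook.pdf")

agent = Agent(knowledge=knowledge, search_knowledge=True)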

What Happens When You Add Content

When you call knowledge.add_content(), here’s what happens:
  1. A reader parses your file - Agno picks the right reader (PDFReader, CSVReader, WebsiteReader, etc.) based on your file type and extracts the text
  2. Content gets chunked - Your chosen chunking strategy breaks the text into digestible pieces, whether by semantic boundaries, fixed sizes, or document structure
  3. Embeddings are created - Each chunk is converted into a vector embedding using your embedder (OpenAI, SentenceTransformer, etc.)
  4. Status is tracked - Content moves through states: PROCESSING → COMPLETED or FAILED
  5. Everything is stored - Chunks, embeddings, and metadata all land in your vector database, ready for search
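The same knowledge object from the setup above lets you observe this pipeline, using the content-management calls covered later on this page:
# Add a file, then check each item's processing status
# (PROCESSING -> COMPLETED or FAILED).
knowledge.add_content(
    path="company_docs/employee_handbook.pdf",
    metadata={"type": "policy", "department": "hr"},
)

content_list, total_count = knowledge.get_content()
for content in content_list:
    status, message = knowledge.get_content_status(content.id)
    print(f"{content.name}: {status} {message or ''}")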

What Happens During a Conversation

When your agent receives a question:
  1. The agent decides - Should I search for more context or answer from what I already know?
  2. Query gets embedded - If searching, your question becomes a vector using the same embedder
  3. Similar chunks are found - knowledge.search() or knowledge.async_search() finds chunks with vectors close to your question
  4. Filters are applied - Any metadata filters you configured narrow down the results
  5. Agent synthesizes the answer - Retrieved context + your question = accurate, grounded response
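With the agent from the setup example, a single call exercises this whole loop; the sketch below assumes the agent’s print_response() helper, and the answer text is illustrative:
# The agent decides to search, embeds the query, retrieves matching chunks,
# applies the {"type": "policy"} filter configured earlier, and answers from them.
agent.print_response("How many vacation days do I get?")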

Key Components Working Together

  • Readers - Agno’s reader factory provides specialized parsers: PDFReader, CSVReader, WebsiteReader, MarkdownReader, and more for different content types.
  • Chunking Strategies - Choose from FixedSizeChunking, SemanticChunking, or RecursiveChunking to optimize how documents are broken down for search.
  • Embedders - Support for OpenAIEmbedder, SentenceTransformerEmbedder, and other embedding models to convert text into searchable vectors.
  • Vector Databases - PgVector for production, LanceDB for development, or PineconeDB for managed services - each with hybrid search capabilities.
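These pieces are swappable. The sketch below shows a local development setup with LanceDB and a SentenceTransformer embedder; the exact import paths and constructor arguments here are assumptions, so check them against your installed Agno version:
# Assumed import paths and parameters for LanceDb / SentenceTransformerEmbedder;
# verify against the reference for your Agno version.
from agno.knowledge.knowledge import Knowledge
from agno.vectordb.lancedb import LanceDb
from agno.knowledge.embedder.sentence_transformer import SentenceTransformerEmbedder

dev_knowledge = Knowledge(
    name="Local Docs",
    vector_db=LanceDb(
        table_name="dev_knowledge",
        uri="tmp/lancedb",  # local directory, no database server required
        embedder=SentenceTransformerEmbedder(),
    ),
)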

Choosing Your Chunking Strategy

How you split content dramatically affects search quality. Agno gives you several strategies to match your content type:
  • Fixed Size - Splits at consistent character counts. Fast and predictable, great for uniform content
  • Semantic - Uses embeddings to find natural topic boundaries. Best for complex docs where meaning matters
  • Recursive - Respects document structure (paragraphs, sections). Good balance of speed and context
  • Document - Preserves natural document divisions. Perfect for well-structured content
  • CSV Row - Treats each row as a unit. Essential for tabular data
  • Markdown - Honors heading hierarchy. Ideal for documentation
Learn more about choosing the right chunking strategy for your use case.
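As a quick sketch of how a strategy gets wired in, you pass it to the reader; the FixedSizeChunking import path below is an assumption modeled on the semantic one shown earlier:
from agno.knowledge.chunking.semantic import SemanticChunking
from agno.knowledge.chunking.fixed import FixedSizeChunking  # assumed module path
from agno.knowledge.reader.pdf_reader import PDFReader

# Meaning-aware splits for a dense policy document.
semantic_reader = PDFReader(
    chunking_strategy=SemanticChunking(chunk_size=1000, similarity_threshold=0.5)
)

# Predictable, uniform splits (also the default when nothing is set).
fixed_reader = PDFReader(
    chunking_strategy=FixedSizeChunking(chunk_size=5000)
)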

Managing Your Knowledge Base

Once content is loaded, you’ll want to check status, search, and manage what’s there:
# Check what's been processed and its status
content_list, total_count = knowledge.get_content()
for content in content_list:
    status, message = knowledge.get_content_status(content.id)
    print(f"{content.name}: {status}")

# Search with metadata filters for more precise results
results = knowledge.search(
    query="vacation policy",
    max_results=5,
    filters={"department": "hr", "type": "policy"}
)

# Validate your filters before searching (catches typos!)
valid_filters, invalid_keys = knowledge.validate_filters({
    "department": "hr",
    "invalid_key": "value"  # This will be flagged as invalid
})
Use knowledge.get_content_status() to debug when content doesn’t appear in search results. It’ll tell you if processing failed or is still in progress.
Agno gives you two ways to use knowledge with agents:
  • Agentic Search (search_knowledge=True): The agent automatically decides when to search and what to look for. This is the recommended approach for most use cases - it’s smarter and more dynamic.
  • Traditional RAG (add_knowledge_to_context=True): Relevant knowledge is always added to the agent’s context. Simpler but less flexible. Use this when you want predictable, consistent behavior.
# Agentic approach (recommended)
agent = Agent(
    knowledge=knowledge,
    search_knowledge=True  # Agent decides when to search
)

# Traditional RAG approach
agent = Agent(
    knowledge=knowledge,
    add_knowledge_to_context=True  # Always includes knowledge
)

Ready to Build?

Now that you understand how Knowledge works in Agno, here’s where to go next: