RAG
Traditional RAG with LanceDB
Code
from agno.agent import Agent
from agno.embedder.openai import OpenAIEmbedder
from agno.knowledge.pdf_url import PDFUrlKnowledgeBase
from agno.models.openai import OpenAIChat
from agno.vectordb.lancedb import LanceDb, SearchType

# Create a knowledge base of PDFs from URLs
knowledge_base = PDFUrlKnowledgeBase(
    urls=["https://agno-public.s3.amazonaws.com/recipes/ThaiRecipes.pdf"],
    # Use LanceDB as the vector database and store embeddings in the `recipes` table
    vector_db=LanceDb(
        table_name="recipes",
        uri="tmp/lancedb",
        search_type=SearchType.vector,
        embedder=OpenAIEmbedder(id="text-embedding-3-small"),
    ),
)
# Load the knowledge base; comment this out after the first run, as the knowledge base is already loaded
knowledge_base.load()

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    knowledge=knowledge_base,
    # Enable traditional RAG by adding references from AgentKnowledge to the user prompt
    add_references=True,
    # Set to False because Agents default to `search_knowledge=True`
    search_knowledge=False,
    show_tool_calls=True,
    markdown=True,
)

agent.print_response(
    "How do I make chicken and galangal in coconut milk soup?", stream=True
)
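For contrast, here is a minimal sketch of the agentic alternative, reusing `knowledge_base` and the imports from the snippet above. It only rearranges the parameters already shown: leaving `search_knowledge=True` (the default) lets the Agent search the knowledge base via a tool call instead of having references injected into the prompt up front.

# Agentic variant (sketch): the Agent decides when to search the knowledge base
agentic_agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    knowledge=knowledge_base,
    # Defaults made explicit: no references injected, search happens on demand
    add_references=False,
    search_knowledge=True,
    show_tool_calls=True,
    markdown=True,
)
agentic_agent.print_response(
    "What ingredients go into Thai coconut milk soup?", stream=True
)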
Usage

1. Create a virtual environment

Open the Terminal and create a Python virtual environment.
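For example, on macOS or Linux (the environment name `.venv` is just an illustrative choice):

python3 -m venv .venv
source .venv/bin/activate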
2. Set your API key

export OPENAI_API_KEY=xxx
3. Install libraries

pip install -U openai lancedb tantivy pypdf sqlalchemy agno
4. Run Agent
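Assuming you saved the code above as `traditional_rag_lancedb.py` (the filename is your choice), run it with:

python traditional_rag_lancedb.py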