| Agent | What it retrieves | Why retrieval |
|---|---|---|
| Pal | Wiki articles, meeting notes, voice guides, raw research dumps | The user’s personal knowledge accumulates over time. Search beats stuffing it all in context. |
| Dash | Table descriptions, validated SQL queries, business rules | The Analyst grounds every query in known patterns. Hallucinated SQL is the failure mode RAG prevents. |
## Pal: knowledge that compounds
Pal stores everything the user feeds it (URLs ingested, articles compiled, meetings) in a vector-indexed wiki. The Navigator agent searches it before answering: `search_type="hybrid"` runs both vector similarity and BM25 keyword search, then merges results. Catches both “what’s the same idea worded differently?” (semantic) and “find the doc that mentions this exact term” (keyword).
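The merge step can be sketched with reciprocal rank fusion, a common way to combine vector and keyword rankings. This is an illustration of the idea, not Agno’s actual merge logic, and the doc ids are made up:

```python
from collections import defaultdict

def reciprocal_rank_fusion(result_lists, k=60):
    """Merge several ranked lists (best-first) into one ranking.
    RRF rewards documents that rank high in any of the lists."""
    scores = defaultdict(float)
    for results in result_lists:
        for rank, doc_id in enumerate(results):
            scores[doc_id] += 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: vector search and keyword search disagree on order.
vector_hits = ["doc_rag_overview", "doc_embeddings", "doc_sql_tips"]
keyword_hits = ["doc_sql_tips", "doc_rag_overview", "doc_meeting_notes"]

merged = reciprocal_rank_fusion([vector_hits, keyword_hits])
print(merged[0])  # doc_rag_overview wins: it is strong in both lists
```

A doc that only matches one search mode still surfaces, just lower down, which is exactly the behavior hybrid search is after.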
## Dash: SQL grounded in known patterns
Dash holds three kinds of knowledge under one `Knowledge` instance:
| Knowledge | Purpose |
|---|---|
| Table metadata | Column meanings, value enums, gotchas |
| Validated query patterns | Tested SQL the Analyst can adapt |
| Business rules | “MRR excludes trials”, “active means ended_at IS NULL” |
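Keeping all three types in one store works because each chunk carries metadata the retriever can filter on. A minimal in-memory stand-in (the `type` tags and example texts are hypothetical, not Dash’s schema):

```python
# One store, three content types, distinguished by metadata.
docs = [
    {"text": "subscriptions.ended_at is NULL while a plan is active",
     "type": "table_metadata"},
    {"text": "SELECT date_trunc('month', started_at) ... -- validated MRR query",
     "type": "query_pattern"},
    {"text": "MRR excludes trial subscriptions",
     "type": "business_rule"},
]

def search(query_terms, doc_type=None):
    """Naive keyword match, optionally restricted to one content type."""
    pool = [d for d in docs if doc_type is None or d["type"] == doc_type]
    return [d for d in pool
            if any(t.lower() in d["text"].lower() for t in query_terms)]

rules = search(["MRR"], doc_type="business_rule")
print(rules[0]["text"])  # only the business rule, not the SQL pattern
```

The same query without the `doc_type` filter would also pull the validated MRR query, which is what the Analyst wants when it is about to write SQL.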
## Loading knowledge
Knowledge content gets loaded once at boot or via a script: `--recreate` rebuilds from scratch. Without the flag, content gets upserted by primary key.
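The two loading modes boil down to wipe-and-rebuild versus upsert-by-key. A sketch of the semantics (the store, ids, and texts here are illustrative, not the real loader):

```python
store = {}

def load(items, recreate=False):
    """recreate=True wipes the store first (like --recreate);
    otherwise each item is upserted by its primary key."""
    if recreate:
        store.clear()
    for item in items:
        store[item["id"]] = item["text"]  # same id overwrites, new id inserts

load([{"id": "mrr_rule", "text": "MRR excludes trials"}], recreate=True)
load([{"id": "mrr_rule", "text": "MRR excludes trials and coupons"},
      {"id": "churn_rule", "text": "active means ended_at IS NULL"}])

print(len(store))         # both keys present
print(store["mrr_rule"])  # the second load overwrote the first
```

Upsert is what makes re-running the load script safe: edited content replaces the old version, untouched content stays put.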
## How retrieval gets injected
When `add_knowledge_to_context=True`, AgentOS:
- Takes the user’s message.
- Runs hybrid search against the `Knowledge` vector DB.
- Pulls top-k chunks with metadata (configurable).
- Injects them into the system prompt under a “Relevant context” section.
- The model answers with both the message and the retrieved context.
When `search_knowledge=True` is also set, the agent gets a `search_knowledge_base(query)` tool and can run additional searches mid-run when it needs to.
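The two modes can be sketched side by side: up-front injection builds the prompt before the model runs, while the tool path lets the model search again mid-run. The function names mirror the flags above, but the bodies are stand-ins, not AgentOS internals:

```python
def build_prompt(system, user_msg, chunks):
    """add_knowledge_to_context=True style: chunks retrieved up front are
    injected into the system prompt under a 'Relevant context' section."""
    context = "\n".join(f"- {c}" for c in chunks)
    return f"{system}\n\nRelevant context:\n{context}\n\nUser: {user_msg}"

def search_knowledge_base(query):
    """search_knowledge=True style: exposed as a tool the model can call
    mid-run when the injected context isn't enough. Toy corpus here."""
    corpus = ["MRR excludes trials", "active means ended_at IS NULL"]
    return [c for c in corpus
            if any(w in c.lower() for w in query.lower().split())]

prompt = build_prompt("You are Dash's Analyst.", "What's our MRR trend?",
                      search_knowledge_base("MRR definition"))
print(prompt)
```

In practice the two are complementary: injection guarantees the model starts with relevant grounding, and the tool covers follow-up questions the first search didn’t anticipate.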
## See it in action
| Try in chat | What happens |
|---|---|
| Pal: “what do I know about RAG techniques?” | Navigator searches the wiki, returns curated answer with citations. |
| Pal: “ingest https://example.com/article” | Researcher fetches, saves to raw/, Compiler eventually folds into wiki. |
| Dash: “what’s our MRR trend?” | Analyst retrieves MRR definition + matching query, writes grounded SQL. |
| Dash: “why is churn so high?” | Analyst retrieves churn definitions, finds matching query patterns, executes, interprets. |
## When hybrid search isn’t enough
For very large knowledge bases, add reranking: PgVector plus a reranker (Cohere, BGE) tightens the top-k.

Code: `agents/pal/`, `agents/dash/`
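Reranking is a second stage: hybrid search retrieves a generous candidate set cheaply, then a stronger (slower) relevance model reorders it and keeps only the best few. A sketch where a word-overlap scorer stands in for a real cross-encoder like Cohere Rerank or BGE:

```python
def rerank(query, candidates, scorer, top_n=2):
    """Rescore hybrid-search candidates with a stronger model,
    keep only the top_n best."""
    return sorted(candidates, key=lambda doc: scorer(query, doc),
                  reverse=True)[:top_n]

# Toy scorer: shared-word count stands in for a cross-encoder score.
def overlap(query, doc):
    return len(set(query.lower().split()) & set(doc.lower().split()))

hits = rerank("monthly recurring revenue trend",
              ["churn by cohort",
               "monthly recurring revenue by month",
               "support ticket volume"],
              overlap)
print(hits[0])  # monthly recurring revenue by month
```

The design trade-off: the reranker sees only the candidates the first stage surfaced, so it sharpens precision without touching recall — which is why it pays off once the knowledge base is large enough that hybrid top-k gets noisy.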