## Point Dash at your own data
The synthetic SaaS dataset is a starting point so you can play with Dash before swapping in real data. To use your own:

- **Replace the data loader.** Either rewrite `scripts/generate_data.py` or `pg_restore` directly into the `public` schema.
- **Rewrite knowledge.** Update `knowledge/tables/` for your schemas, `knowledge/queries/` for proven SQL, and `knowledge/business/` for your definitions and gotchas.
- **Reload:** `docker exec -it dash-api python scripts/load_knowledge.py --recreate`
Dash writes derived artifacts into its own `dash` schema as it works (`dash.monthly_mrr`, `dash.customer_health_score`). The Analyst discovers and prefers those over re-querying raw tables.
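The `pg_restore` route can be sketched as two commands. Only the `dash-api` container and the reload command come from this page; the database container name, role, database name, and dump path below are assumptions to adapt to your compose setup:

```shell
# Restore your own dump into the public schema of the Dash database.
# "dash-db", the "postgres" role, the "dash" database, and the dump path
# are assumptions -- substitute your own values.
docker exec -i dash-db pg_restore \
  --no-owner --clean --if-exists \
  -U postgres -d dash /backups/yourdb.dump

# Then rebuild the knowledge store against the new data.
docker exec -it dash-api python scripts/load_knowledge.py --recreate
```

After the restore, remember to rewrite the knowledge files before reloading, or the Analyst will reason from stale table descriptions.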
## Add knowledge layers
Three kinds of knowledge feed Dash. The Dash README walks through each with examples:

| Layer | What it is | Where in repo |
|---|---|---|
| Table metadata | Column meanings, value enums, gotchas | `knowledge/tables/*.json` |
| Query patterns | Tested SQL the Analyst can adapt | `knowledge/queries/*.sql` |
| Business rules | Metric definitions, common pitfalls | `knowledge/business/*.json` |
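The authoritative file format is whatever already ships in `knowledge/tables/`; as an illustrative sketch only (every field name here is hypothetical), a table-metadata entry might capture columns and gotchas like this:

```json
{
  "table": "public.subscriptions",
  "description": "One row per customer subscription; superseded rows are kept.",
  "columns": {
    "status": "Enum: trialing | active | past_due | canceled",
    "mrr_cents": "Monthly recurring revenue in cents; 0 while trialing"
  },
  "gotchas": [
    "Filter to status = 'active' for point-in-time MRR; canceled rows remain in the table."
  ]
}
```

Whatever the exact schema, the gotchas are the highest-leverage part: they are what keeps the Analyst from writing plausible-but-wrong SQL.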
## Schedule proactive runs
A useful data agent posts on its own: a morning MRR digest, alerts when churn drifts, a weekly summary into Slack. See Scheduling for the patterns. The Coda template is the working example to copy: it registers daily digest, issue triage, and repo sync schedules in `app/main.py`. The same pattern works for Dash.
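The registration API itself is shown in the Coda template; as a library-agnostic sketch of the underlying timing logic, a daily trigger reduces to computing the delay until the next slot (`post_mrr_digest` below is a hypothetical callable that runs the team and posts to Slack):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

def seconds_until_daily(hour: int, minute: int = 0,
                        now: Optional[datetime] = None) -> float:
    """Seconds until the next daily run at hour:minute UTC."""
    now = now or datetime.now(timezone.utc)
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:  # today's slot already passed -> schedule tomorrow's
        target += timedelta(days=1)
    return (target - now).total_seconds()

# Sketch of the proactive loop (post_mrr_digest is hypothetical):
# while True:
#     time.sleep(seconds_until_daily(8))  # 08:00 UTC morning digest
#     post_mrr_digest()
```

In practice you would let the scheduler the template registers in `app/main.py` own this loop rather than hand-rolling `sleep` calls.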
## Run evals
Dash ships with five eval categories. Use them to track quality as you change knowledge or models:

| Category | Tests |
|---|---|
| accuracy | Correct data and meaningful insights |
| routing | Team routes to the right agent and tools |
| security | No credential or secret leaks |
| governance | Refuses destructive SQL operations |
| boundaries | Schema access boundaries respected |
## Going deeper
| To learn | See |
|---|---|
| The team architecture | `dash/team.py` and `dash/agents/` |
| The inspiration | OpenAI’s in-house data agent |
| Knowledge in Agno generally | Knowledge |
| Comparable templates | Scout, Coda |
| Building a fully custom AgentOS app | Build a Product |