Skip to main content
Dash starts useful on the synthetic SaaS data and gets sharper as you give it your own. Most of that work is curating knowledge and scheduling proactive runs.

Point Dash at your own data

The synthetic SaaS dataset is a starting point so you can play with Dash before swapping in real data. To use your own:
  1. Replace the data loader. Either rewrite scripts/generate_data.py or pg_restore directly into the public schema.
  2. Rewrite knowledge. Update knowledge/tables/ for your schemas, knowledge/queries/ for proven SQL, knowledge/business/ for your definitions and gotchas.
  3. Reload: docker exec -it dash-api python scripts/load_knowledge.py --recreate.
The Engineer will start building reusable views in the dash schema as it works (dash.monthly_mrr, dash.customer_health_score). The Analyst discovers and prefers those over re-querying raw tables.

Add knowledge layers

Three kinds of knowledge feed Dash. The Dash README walks through each with examples:
LayerWhat it isWhere in repo
Table metadataColumn meanings, value enums, gotchasknowledge/tables/*.json
Query patternsTested SQL the Analyst can adaptknowledge/queries/*.sql
Business rulesMetric definitions, common pitfallsknowledge/business/*.json
After editing, reload:
docker exec -it dash-api python scripts/load_knowledge.py             # upsert
docker exec -it dash-api python scripts/load_knowledge.py --recreate  # fresh start
The fastest way to get Dash performing well is to feed it the queries your team already trusts. Each one reduces the surface area where the model has to invent SQL.

Schedule proactive runs

A useful data agent posts on its own. Morning MRR digest. Alerts when churn drifts. Weekly summary into Slack. See Scheduling for the patterns. The Coda template is the working example to copy: it registers daily digest, issue triage, and repo sync schedules in app/main.py. The same pattern works for Dash.

Run evals

Dash ships with five eval categories. Use them to track quality as you change knowledge or models:
python -m evals                      # all evals
python -m evals --category accuracy  # one category
python -m evals --verbose            # show full responses
CategoryTests
accuracyCorrect data and meaningful insights
routingTeam routes to the right agent and tools
securityNo credential or secret leaks
governanceRefuses destructive SQL operations
boundariesSchema access boundaries respected
Add your own cases as you discover them. Run evals before each deploy to catch regressions.

Going deeper

To learnSee
The team architecturedash/team.py and dash/agents/
The inspirationOpenAI’s in-house data agent
Knowledge in Agno generallyKnowledge
Comparable templatesScout, Coda
Building a fully custom AgentOS appBuild a Product