Every company with 30+ people should have an internal data agent. Most AI-forward companies are already building one in-house: Clone Dash and you have the same thing, in your cloud, in an afternoon.

The system

Dash is a multi-agent system with hard-enforced boundaries:
| Member | Schema access | Tools |
| --- | --- | --- |
| Analyst | public (read-only) | SQLTools(read_only=True), introspect_schema, save_validated_query, ReasoningTools |
| Engineer | public (read), dash (read+write) | SQLTools (full), introspect_schema, update_knowledge, ReasoningTools |
| Leader | none directly | Routes the request; optional SlackTools for posting back |
These boundaries are enforced by the database engine itself. The Analyst’s connection physically cannot write. The Engineer’s writes physically cannot touch public. The boundary holds even if the model goes off-script.
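Database-enforced boundaries like these come down to ordinary Postgres privileges. A minimal sketch of what the grants could look like — the role names (dash_analyst, dash_engineer) are illustrative assumptions, not Dash's actual configuration:

```python
def grants_for(member: str) -> list[str]:
    """Return illustrative Postgres GRANT statements for a Dash team member.

    Role names are hypothetical; the point is that read/write limits live
    in the database engine, not in the prompt.
    """
    if member == "analyst":
        # Read-only: SELECT on public, nothing else.
        return [
            "GRANT USAGE ON SCHEMA public TO dash_analyst;",
            "GRANT SELECT ON ALL TABLES IN SCHEMA public TO dash_analyst;",
        ]
    if member == "engineer":
        # Read public, full control of the dash schema only.
        return [
            "GRANT USAGE ON SCHEMA public TO dash_engineer;",
            "GRANT SELECT ON ALL TABLES IN SCHEMA public TO dash_engineer;",
            "GRANT ALL ON SCHEMA dash TO dash_engineer;",
            "GRANT ALL ON ALL TABLES IN SCHEMA dash TO dash_engineer;",
        ]
    return []  # the Leader gets no database connection at all
```

With grants like these, a stray `INSERT` from the Analyst fails at the connection level, regardless of what the model generates.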

Six layers of context

Every Dash query is grounded in six layers:
| Layer | Source |
| --- | --- |
| Validated queries | knowledge/queries/*.sql |
| Business rules | knowledge/business/*.json |
| Table metadata | knowledge/tables/*.json |
| Institutional knowledge | MCP (optional) |
| Learnings | Dash's LearningMachine |
| Runtime context | introspect_schema tool |
The first four are curated and stored in pgvector. Learnings are captured automatically as Dash works. Runtime context is fetched live.
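Loading the curated layers is just reading the directory layout from the table above. A minimal sketch, assuming that layout — in Dash the contents are embedded into pgvector, but the gathering step looks like this:

```python
import json
from pathlib import Path


def load_curated_context(root: Path) -> dict[str, list]:
    """Gather the file-backed knowledge layers under `root`.

    Directory names mirror the table above (queries/, business/, tables/);
    this is an illustrative loader, not Dash's internal API.
    """
    return {
        "validated_queries": [p.read_text() for p in sorted(root.glob("queries/*.sql"))],
        "business_rules": [json.loads(p.read_text()) for p in sorted(root.glob("business/*.json"))],
        "table_metadata": [json.loads(p.read_text()) for p in sorted(root.glob("tables/*.json"))],
    }
```

Each retrieval then pulls the entries most similar to the question from these collections, plus any learnings and a live schema introspection.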

How Dash works

Every question runs through the same loop:
  1. Retrieve. Dash pulls the matching knowledge layers and any prior learnings.
  2. Generate. The Analyst writes SQL grounded in what came back, then runs it read-only against public.
  3. Answer. Dash composes a response with the numbers and a citation to the SQL it ran.
  4. Learn. Errors get diagnosed and the fix is saved as a learning so the same error can’t recur.
  5. Materialize. When a question repeats, the Leader asks the Engineer to build a view in the dash schema. The next ask hits the view directly.
After a few weeks of real use, the same errors stop recurring, the right query shapes sit in the knowledge stores, and the dash schema fills with views your team can hit directly, without anyone writing a migration.

Next

Setup takes about five minutes and starts you on a synthetic SaaS dataset (~900 customers, two years of data) so you have something concrete to ask. Setup →