| Tier | Who decides | Examples |
|---|---|---|
| No approval | Nobody. The agent just runs. | Reading files, searching knowledge, querying a database, summarizing a doc |
| User approval | The person asking the agent | Sending an email, writing to a file, running a shell command, modifying a record |
| Admin approval | A designated approver, not the user | Issuing a refund, granting permissions, deploying to prod, deleting customer data |
## User confirmation
The user asking the agent to do something gets to confirm the actual tool call before it fires. This is what Claude Code does when it asks “Can I run this command?” — and what your agent should do for any tool that changes state. When the agent calls `restart_service`, the run pauses. In Slack, the agent posts an inline confirmation prompt. In the AgentOS UI, the run shows up as Awaiting approval with Approve / Reject buttons. The user clicks. The run resumes.
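A minimal sketch of opting a tool into confirmation with Agno’s `@tool` decorator; `restart_service` is illustrative:

```python
from agno.agent import Agent
from agno.tools import tool


# requires_confirmation pauses the run before this tool executes,
# until the requesting user approves the call.
@tool(requires_confirmation=True)
def restart_service(service_name: str) -> str:
    """Restart a service on the host."""
    # ... real restart logic goes here ...
    return f"{service_name} restarted"


agent = Agent(tools=[restart_service])
```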
This covers shell commands, sends, writes, API calls with side effects, anything that changes state in the real world. Even reversible actions benefit from “are you sure?” before executing.
## Admin approval
Some actions need approval from someone with policy authority, not just the user asking. A user can ask the agent to issue a $10K refund. The user shouldn’t be the one approving it. Admin approval routes the action to a designated approver pool with the right role permissions.

| `@approval(type=...)` | Behavior |
|---|---|
| `"required"` | Run blocks until a designated approver signs off. The audit trail captures the approver’s identity. |
| `"audit"` | Run continues. The tool call gets logged to the audit trail asynchronously. Used when policy says “track but don’t gate.” |
`@approval` and `requires_confirmation` compose. A refund tool can require both — the user confirms they want it, the admin signs off on the amount. The audit trail captures every decision along the way.
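A sketch of the composition; `@tool(requires_confirmation=True)` is Agno’s confirmation flag, while the `@approval` import path here is an assumption. Adjust it to wherever the decorator lives in your setup:

```python
from agno.tools import tool

# Hypothetical import path for the @approval decorator described above.
from agentos.approvals import approval


# Two gates on one tool: the user confirms they want the refund,
# then a designated approver signs off before it executes.
@approval(type="required")
@tool(requires_confirmation=True)
def issue_refund(order_id: str, amount: float) -> str:
    """Issue a refund for an order."""
    # ... refund logic ...
    return f"Refunded ${amount:,.2f} for order {order_id}"
```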
## Traces vs audit logs
These are different things, and they need different storage:

| | Traces | Audit logs |
|---|---|---|
| What they capture | Every step of every run: model calls, tool calls, latency, tokens | Approved actions: who approved, when, what context |
| Purpose | Debugging and optimization | Accountability and compliance |
| Retention | 30-120 days typical | Lifetime of the product |
| Access pattern | High volume, infrequent reads | Low volume, audited reads |
| Storage | `agno_traces`, `agno_spans` | `agno_approvals` plus your own audit table |
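If you roll your own audit table, something like this minimal schema covers the who/when/what from the table above (illustrative, not prescribed):

```python
import sqlite3

conn = sqlite3.connect("audit.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS action_audit (
        id          INTEGER PRIMARY KEY,
        run_id      TEXT NOT NULL,
        tool_name   TEXT NOT NULL,
        approver    TEXT NOT NULL,   -- who approved
        decision    TEXT NOT NULL,   -- 'approved' or 'rejected'
        context     TEXT,            -- JSON blob of tool args
        decided_at  TEXT NOT NULL    -- when, as ISO-8601
    )
""")
conn.commit()
```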
## Two related primitives
Not every pause is an approval. Two adjacent patterns let the run wait on a human or another system without the binary approve/reject shape.

### User input mid-run
The agent needs more info from the user before it can finish: `UserFeedbackTools` and `UserControlFlowTools` handle this. See the Feedback agent demo.
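A minimal sketch of wiring one of these in, assuming `UserControlFlowTools` lives at `agno.tools.user_control_flow`:

```python
from agno.agent import Agent
from agno.tools.user_control_flow import UserControlFlowTools

# UserControlFlowTools lets the model pause the run and ask the user
# for input mid-run, then resume once it has an answer.
agent = Agent(tools=[UserControlFlowTools()])

run = agent.run("Book me a flight")  # may pause to ask for dates, airports
```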
### External execution
The tool routes through a system with its own approval flow: a change-management tool, a Jira workflow, a CI pipeline that requires manual promotion.
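A sketch of the pattern: the tool files the request and returns immediately, so the approval lives in the external system, not in the run. `create_change_request` and `ChangeTicket` are hypothetical stand-ins for your change-management client:

```python
from dataclasses import dataclass

from agno.tools import tool


@dataclass
class ChangeTicket:
    url: str


def create_change_request(service: str, version: str) -> ChangeTicket:
    # Hypothetical client for your change-management system (Jira,
    # ServiceNow, ...); the approval workflow lives over there.
    return ChangeTicket(url=f"https://change.example.com/{service}/{version}")


@tool
def deploy_to_prod(service: str, version: str) -> str:
    """Request a production deploy; approval happens in change management."""
    ticket = create_change_request(service=service, version=version)
    return f"Deploy of {service}@{version} pending approval: {ticket.url}"
```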
## Resuming a paused run

The agent returns a `RunOutput` with `active_requirements`. Inspect them, decide, continue:
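A sketch of that loop, reusing the `agent` from the earlier sketches; beyond `RunOutput`, `active_requirements`, and `continue_run`, the attribute and method names here are assumptions about the API shape:

```python
run = agent.run("Issue a refund for order 4812")

if run.active_requirements:
    for req in run.active_requirements:
        # Assumed shape: each requirement identifies the pending tool call
        # and exposes confirm/reject. Check your version's RunOutput docs.
        print(req.tool_name, req.tool_args)
        req.confirm()  # or req.reject()

    # Resume from where the run paused.
    run = agent.continue_run(run_id=run.run_id)

print(run.content)
```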
The `continue_run` endpoint is what those surfaces call: the Slack prompt and the AgentOS Approve button resume the run the same way.
## Automated guardrails
Some checks shouldn’t wait on a human. PII masking, prompt injection detection, content moderation, audit logging — these run inline as hooks.

### Pre-hooks (input)
Pre-hooks run before the model sees the user’s message. Use them to mask PII, block prompt injections, or moderate content:
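A minimal sketch of a PII-masking pre-hook, assuming `pre_hooks` on `Agent`; the `run_input` parameter and its `input_content` attribute are assumed names:

```python
import re

from agno.agent import Agent


def mask_pii(run_input) -> None:
    # Redact anything shaped like a US SSN before the model sees it.
    # run_input.input_content is an assumed attribute name; check your
    # version's pre-hook signature.
    run_input.input_content = re.sub(
        r"\b\d{3}-\d{2}-\d{4}\b", "[REDACTED-SSN]", run_input.input_content
    )


agent = Agent(pre_hooks=[mask_pii])
```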
### Post-hooks (output)

Post-hooks run after the model produces output. Use them for output guardrails, audit logs, notifications. Setting `run_in_background=True` makes the hook run as a FastAPI background task, so the user gets the response without waiting on the audit write.
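A sketch of an audit-logging post-hook, assuming `post_hooks` on `Agent`; the `run_output` parameter and its attributes are assumed names:

```python
from agno.agent import Agent


def audit_log(run_output) -> None:
    # Persist the final output for accountability. The run_output
    # attribute names are assumptions; adapt to your version.
    with open("audit.log", "a") as f:
        f.write(f"{run_output.run_id}\t{run_output.content}\n")


agent = Agent(post_hooks=[audit_log])
```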