Every agent action falls into one of three buckets:
| Tier | Who decides | Examples |
|---|---|---|
| No approval | Nobody; the agent just runs. | Reading files, searching knowledge, querying a database, summarizing a doc |
| User approval | The person asking the agent | Sending an email, writing to a file, running a shell command, modifying a record |
| Admin approval | A designated approver, not the user | Issuing a refund, granting permissions, deploying to prod, deleting customer data |
Most agent actions belong in the first bucket. Putting approval gates on reads slows you down for no reason. The interesting work is getting the second and third tiers right.
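The bucket an action falls into is a property of the tool, decided before any call fires. A framework-independent sketch of that policy lookup — the tool names and the `POLICY` table are hypothetical — with unknown tools defaulting to the most restrictive tier:

```python
from enum import Enum

class Tier(Enum):
    NO_APPROVAL = "no_approval"
    USER_APPROVAL = "user_approval"
    ADMIN_APPROVAL = "admin_approval"

# Hypothetical policy table mapping tool names to tiers.
POLICY = {
    "search_knowledge": Tier.NO_APPROVAL,  # reads run freely
    "send_email": Tier.USER_APPROVAL,      # user confirms state changes
    "issue_refund": Tier.ADMIN_APPROVAL,   # policy authority signs off
}

def gate(tool_name: str) -> Tier:
    # Unknown tools default to the most restrictive tier.
    return POLICY.get(tool_name, Tier.ADMIN_APPROVAL)
```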

User confirmation

The user asking the agent to do something gets to confirm the actual tool call before it fires. This is what Claude Code does when it asks “Can I run this command?” — and what your agent should do for any tool that changes state.
```python
from agno.tools import tool

@tool(requires_confirmation=True)
def restart_service(service_name: str) -> str:
    return ops_client.restart(service_name)
```
When the agent decides to call restart_service, the run pauses. In Slack, the agent posts an inline confirmation prompt. In the AgentOS UI, the run shows up as Awaiting approval with Approve / Reject buttons. The user clicks. The run resumes. This covers shell commands, sends, writes, API calls with side effects, anything that changes state in the real world. Even reversible actions benefit from “are you sure?” before executing.

Admin approval

Some actions need approval from someone with policy authority, not just the user asking. A user can ask the agent to issue a $10K refund. The user shouldn’t be the one approving it. Admin approval routes the action to a designated approver pool with the right role permissions.
```python
from agno.tools import tool
from agno.approval.decorator import approval

@approval(type="required")
@tool
def issue_refund(customer_id: str, amount: float) -> str:
    return charge_refund(customer_id, amount)

@approval(type="audit")
@tool
def export_customer_data(customer_id: str) -> str:
    return get_customer_data(customer_id)
```
| `@approval(type=...)` | Behavior |
|---|---|
| `"required"` | Run blocks until a designated approver signs off. The audit trail captures the approver's identity. |
| `"audit"` | Run continues; the tool call is logged to the audit trail asynchronously. Use when policy says "track but don't gate." |
This is the tier most frameworks don't have, and the one that matters most for production agents in enterprises. When an admin approves a $10K refund triggered by an agent, the approval, who approved it, when, and the full context of the request all need to be retained for the life of your product.

`@approval` and `requires_confirmation` compose. A refund tool can require both: the user confirms they want it, the admin signs off on the amount. The audit trail captures every decision along the way.
```python
@approval(type="required")
@tool(requires_confirmation=True)
def issue_refund(customer_id: str, amount: float, reason: str) -> str:
    return charge_refund(customer_id, amount)
```
See Approvals for routing setup.

Traces vs audit logs

These are different things, and they need different storage:
| | Traces | Audit logs |
|---|---|---|
| What they capture | Every step of every run: model calls, tool calls, latency, tokens | Approved actions: who approved, when, what context |
| Purpose | Debugging and optimization | Accountability and compliance |
| Retention | 30-120 days typical | Lifetime of the product |
| Access pattern | High volume, infrequent reads | Low volume, audited reads |
| Storage | `agno_traces`, `agno_spans` | `agno_approvals` plus your own audit table |
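The retention row is the operational crux: one sweep job can expire traces on a schedule while never touching audit rows. A minimal sketch, assuming trace rows carry a `created_at` timestamp — the 90-day window and row shape are illustrative, not an AgentOS API:

```python
from datetime import datetime, timedelta, timezone

TRACE_RETENTION = timedelta(days=90)  # traces expire; audit rows never do

def expired_traces(traces: list[dict], now: datetime) -> list[dict]:
    """Return trace rows old enough to delete from trace storage."""
    cutoff = now - TRACE_RETENTION
    return [t for t in traces if t["created_at"] < cutoff]
```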
If you treat them as one thing, you'll either delete audit logs too early (compliance risk) or pay to keep traces forever. AgentOS keeps them in separate tables for this reason. For high-volume deployments, traces can also live in a separate database so audit storage stays cheap and durable. For custom audit shapes (regulatory, internal SOX-style), add a post-hook that writes to your own audit table. See Observability.

Not every pause is an approval. Two adjacent patterns let the run wait on a human or another system without the binary approve/reject shape.

User input mid-run

The agent needs more info from the user before it can finish:
```python
from agno.tools import tool

@tool(requires_user_input=True)
def file_support_ticket(summary: str) -> str:
    # The agent must collect a description from the user
    # before this tool can complete.
    ...
```
The agent prompts the user with the structured request. The user responds. The run resumes with the response in scope. For multi-question or branching forms, use UserFeedbackTools and UserControlFlowTools. See the Feedback agent demo.
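Under the hood this is a collect-until-complete loop: the run pauses whenever required fields are still empty, and resumes once the user has filled them. A framework-independent sketch (the ticket schema is hypothetical):

```python
REQUIRED_FIELDS = ("summary", "description")  # hypothetical ticket fields

def missing_fields(collected: dict) -> list[str]:
    """Fields the agent still has to ask the user for."""
    return [f for f in REQUIRED_FIELDS if not collected.get(f)]
```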

External execution

The tool routes through a system with its own approval flow: a change-management tool, a Jira workflow, a CI pipeline that requires manual promotion.
```python
from agno.tools import tool

@tool(external_execution=True)
def submit_to_change_management(payload: dict) -> str:
    return submit(payload)
```
The agent doesn’t wait on a Python call to return. It hands off, the run pauses, the external system reports back via the AgentOS API, the run resumes.
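The resume step reduces to applying the external system's verdict to the paused run. A sketch of that state transition — the run and result shapes here are assumptions for illustration, not the AgentOS API:

```python
def apply_external_result(run_state: dict, result: dict) -> dict:
    """Fold an external system's callback into a paused run."""
    if run_state.get("status") != "paused":
        raise ValueError("run is not waiting on an external system")
    status = "resumed" if result.get("approved") else "rejected"
    return {**run_state, "status": status, "external_result": result}
```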

Resuming a paused run

The agent returns a RunOutput with active_requirements. Inspect them, decide, continue:
```python
run = agent.run("Refund customer ACME-123 for $500")

for req in run.active_requirements:
    if req.needs_confirmation:
        print(f"Tool: {req.tool_name}, args: {req.tool_args}")
        req.confirm()  # or req.reject(reason="...")

result = agent.continue_run(run_id=run.run_id, requirements=run.requirements)
```
In the AgentOS UI and Slack, this happens via buttons. The same continue_run endpoint is what those surfaces call.

Automated guardrails

Some checks shouldn’t wait on a human. PII masking, prompt injection detection, content moderation, audit logging — these run inline as hooks.

Pre-hooks (input)

Pre-hooks run before the model sees the user’s message. Use them to mask PII, block prompt injections, or moderate content:
```python
from agno.agent import Agent
from agno.guardrails import PIIDetectionGuardrail, PromptInjectionGuardrail

agent = Agent(
    model="openai:gpt-5.4",
    pre_hooks=[
        PIIDetectionGuardrail(
            mask_pii=True,
            enable_ssn_check=True,
            enable_credit_card_check=True,
            enable_email_check=True,
            enable_phone_check=True,
        ),
        PromptInjectionGuardrail(),
    ],
)
```
The user’s input gets sanitized before it reaches the model. The agent never sees the raw input if a guardrail rejects it.
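To make the masking concrete, here is a toy version of what an email mask does to input text before the model sees it. The built-in guardrail is more thorough; this regex is illustrative only:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def mask_emails(text: str) -> str:
    # Replace every email address with a placeholder token.
    return EMAIL.sub("[EMAIL]", text)
```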

Post-hooks (output)

Post-hooks run after the model produces output. Use them for output guardrails, audit logs, notifications:
```python
from agno.agent import Agent
from agno.hooks import hook

@hook(run_in_background=True)
def audit_log(run_output, agent):
    log_to_audit_table(
        agent_id=agent.id,
        user_id=run_output.user_id,
        content=run_output.content,
        tools_used=[t.name for t in run_output.tool_calls],
    )

agent = Agent(model=..., post_hooks=[audit_log])
```
`run_in_background=True` makes the hook run as a FastAPI background task, so the user gets the response without waiting on the audit write.

Worked examples

| Demo | What it shows |
|---|---|
| Helpdesk | User confirmation, user input, external execution, plus PII + injection guardrails |
| Approvals | The `@approval` decorator with audit trail |
| Feedback | `UserFeedbackTools` and `UserControlFlowTools` for structured questions |

Next

Observability →