You can use hooks on agents and teams to do work before or after the main execution of the run. Use cases for hooks include:
  • Security guardrails (e.g. PII detection, prompt injection defense)
  • Input validation
  • Output validation
  • Data preprocessing (e.g. normalizing input data)
  • Data postprocessing (e.g. adding additional context to the output)
  • Logging (e.g. logging the duration of the run)
  • Debugging (e.g. inspecting a run’s inputs and outputs)

When Hooks Are Triggered

Hooks execute at specific points in the Agent/Team run lifecycle:
  • Pre-hooks: Execute immediately after the current session is loaded, before any processing begins. They run before the model context is prepared and before any LLM execution, so any modifications to the input, session state, or dependencies take effect before the model is called (see the sketch after this list).
  • Post-hooks: Execute after the Agent/Team generates a response and the output is prepared, but before the response is returned to the user. In streaming responses, they run after each chunk of the response is generated.
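
To make the ordering concrete, here is a minimal sketch of an Agent with one hook of each kind (the hook names are hypothetical; the pre_hooks and post_hooks parameters are covered in the sections below):
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.run.agent import RunInput, RunOutput

def before_run(run_input: RunInput) -> None:
    # Runs after the session is loaded, before the model is called
    print("pre-hook sees:", run_input.input_content)

def after_run(run_output: RunOutput) -> None:
    # Runs after the output is prepared, before it is returned to the user
    print("post-hook sees:", run_output.content)

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    pre_hooks=[before_run],
    post_hooks=[after_run],
)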

Pre-hooks

Pre-hooks execute at the very beginning of your Agent run, giving you complete control over what reaches the LLM. They’re perfect for input validation, security checks, or any preprocessing of the input your Agent receives.

Common Use Cases

Security Guardrails
  • Detect and prevent PII (Personally Identifiable Information) from reaching the LLM.
  • Defend against prompt injection and jailbreak attempts.
  • Filter NSFW or inappropriate content.
  • See the Guardrails documentation for more details.
Input Validation
  • Validate the format, length, content, or any other property of the input.
  • Remove or mask sensitive information (a masking sketch follows this list).
  • Normalize input data.
Data Preprocessing
  • Transform input format or structure.
  • Enrich input with additional context.
  • Apply any other business logic before sending the input to the LLM.
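
Because pre-hooks run before the model is called, mutating run_input is enough to change what the LLM sees. Below is a minimal masking sketch, assuming run_input.input_content is a plain string as in the examples that follow:
import re

from agno.run.agent import RunInput

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def mask_emails(run_input: RunInput) -> None:
    """Pre-hook: mask email addresses before the input reaches the LLM."""
    # Assumption: input_content is a string; skip anything else
    if isinstance(run_input.input_content, str):
        run_input.input_content = EMAIL_PATTERN.sub("[EMAIL]", run_input.input_content)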

Basic Example

Let’s create a simple pre-hook that validates the input length and raises an error if it’s too long:
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.exceptions import CheckTrigger, InputCheckError
from agno.run.agent import RunInput

# Simple function we will use as a pre-hook
def validate_input_length(
    run_input: RunInput,
) -> None:
    """Pre-hook to validate input length."""
    max_length = 1000
    if len(run_input.input_content) > max_length:
        raise InputCheckError(
            f"Input too long. Max {max_length} characters allowed",
            check_trigger=CheckTrigger.INPUT_NOT_ALLOWED,
        )

agent = Agent(
    name="My Agent",
    model=OpenAIChat(id="gpt-4o"),
    # Provide the pre-hook to the Agent using the pre_hooks parameter
    pre_hooks=[validate_input_length],
)
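
Assuming the raised InputCheckError propagates out of agent.run, a rejected input can be handled at the call site like any other exception. Continuing the example above:
try:
    agent.run("x" * 2000)  # 2000 characters, over the 1000-character limit
except InputCheckError as e:
    print(f"Input rejected: {e}")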
You can see complete examples of pre-hooks in the Examples section.

Pre-hook Parameters

Pre-hooks run automatically during the Agent run and receive the following parameters:
  • run_input: The input to the Agent run that can be validated or modified
  • agent: Reference to the Agent instance
  • session: The current agent session
  • run_context: The current run context. See the Run Context reference.
  • debug_mode: Whether debug mode is enabled (optional)
The framework automatically injects only the parameters your hook function accepts, so you can define hooks with just the parameters you need. You can learn more about the parameters in the Pre-hooks reference.
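
For example, a hook that needs only the input, the Agent, and the debug flag can declare exactly those. A sketch (the parameter names match the list above; the logging itself is illustrative):
from agno.agent import Agent
from agno.run.agent import RunInput

def log_input(run_input: RunInput, agent: Agent, debug_mode: bool = False) -> None:
    """Pre-hook that declares only the parameters it needs."""
    if debug_mode:
        print(f"[{agent.name}] input: {run_input.input_content!r}")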

Post-hooks

Post-hooks execute after your Agent generates a response, allowing you to validate, transform, or enrich the output before it reaches the user. They’re perfect for output filtering, compliance checks, response enrichment, or any other output transformation you need.

Common Use Cases

Output Validation
  • Validate response format, length, and content quality.
  • Remove sensitive or inappropriate information from responses.
  • Ensure compliance with business rules and regulations.
Output Transformation
  • Add metadata or additional context to responses (sketched after this list).
  • Transform output format for different clients or use cases.
  • Enrich responses with additional data or formatting.
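
Since run_output can be modified in place (see the parameters below), changes a post-hook makes are reflected in the response the user receives. A minimal enrichment sketch, assuming run_output.content is a plain string:
from agno.run.agent import RunOutput

def add_footer(run_output: RunOutput) -> None:
    """Post-hook: append a short footer to the response content."""
    # Assumption: content is a string; skip anything else
    if isinstance(run_output.content, str):
        run_output.content += "\n\n(Generated by My Agent)"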

Basic Example

Let’s create a simple post-hook that validates the output length and raises an error if it’s too long:
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.exceptions import CheckTrigger, OutputCheckError
from agno.run.agent import RunOutput

# Simple function we will use as a post-hook
def validate_output_length(
    run_output: RunOutput,
) -> None:
    """Post-hook to validate output length."""
    max_length = 1000
    if len(run_output.content) > max_length:
        raise OutputCheckError(
            f"Output too long. Max {max_length} characters allowed",
            check_trigger=CheckTrigger.OUTPUT_NOT_ALLOWED,
        )

agent = Agent(
    name="My Agent",
    model=OpenAIChat(id="gpt-5-mini"),
    # Provide the post-hook to the Agent using the post_hooks parameter
    post_hooks=[validate_output_length],
)
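
As with pre-hooks, assuming the raised OutputCheckError propagates out of agent.run, a rejected response can be handled at the call site. Continuing the example above:
try:
    response = agent.run("Write a detailed essay on the history of computing.")
    print(response.content)
except OutputCheckError as e:
    print(f"Output rejected: {e}")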
You can see complete examples of post-hooks in the Examples section.

Post-hook Parameters

Post-hooks run automatically during the Agent run and receive the following parameters:
  • run_output: The output from the Agent run that can be validated or modified
  • agent: Reference to the Agent instance
  • session: The current agent session
  • run_context: The current run context. See the Run Context reference.
  • user_id: The user ID for the run (optional)
  • debug_mode: Whether debug mode is enabled (optional)
The framework automatically injects only the parameters your hook function accepts, so you can define hooks with just the parameters you need. You can learn more about the parameters in the Post-hooks reference.
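
As with pre-hooks, declare only what you need. A sketch of an audit-style post-hook using run_output, agent, and user_id (the print format is illustrative):
from typing import Optional

from agno.agent import Agent
from agno.run.agent import RunOutput

def audit_response(run_output: RunOutput, agent: Agent, user_id: Optional[str] = None) -> None:
    """Post-hook that declares only the parameters it needs."""
    print(f"[{agent.name}] replied to user {user_id}: {len(str(run_output.content))} characters")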

Guardrails

A popular use case for hooks is Guardrails: built-in safeguards for your Agents. You can learn more about them in the Guardrails section.

Developer Resources