Guardrails are built-in safeguards for your Agents and Teams. Use them to ensure the input you send to the LLM is safe and free of undesired content. Some of the most popular use cases are:
  • PII detection and redaction
  • Prompt injection defense
  • Jailbreak defense
  • Data leakage prevention
  • NSFW content filtering

Agno included Guardrails

Agno provides some built-in guardrails you can use out of the box with your Agents and Teams. Guardrails are implemented as pre-hooks, which execute before your Agent processes input. To use one of the Agno included guardrails, just import it and pass it to the Agent or Team with the pre_hooks parameter. For example, to use the PII Detection Guardrail:
from agno.agent import Agent
from agno.guardrails import PIIDetectionGuardrail
from agno.models.openai import OpenAIChat

agent = Agent(
    name="Privacy-Protected Agent",
    model=OpenAIChat(id="gpt-5-mini"),
    pre_hooks=[PIIDetectionGuardrail()],
)
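When a guardrail triggers, the run is stopped by raising an exception before the model is called. A minimal sketch of handling this, assuming the PII guardrail raises InputCheckError on detection (the email address below is a hypothetical example input):
from agno.exceptions import InputCheckError

try:
    # Hypothetical input containing PII
    agent.run("Hi, my email is jane.doe@example.com")
except InputCheckError as e:
    # The run was blocked before reaching the model
    print(f"Input rejected: {e}")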
You can see complete examples using the Agno Guardrails in the Usage section.
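Since pre_hooks takes a list, you can also stack several guardrails on one Agent. A sketch, assuming a PromptInjectionGuardrail is available among the built-ins (check the exact name against the reference):
from agno.agent import Agent
from agno.guardrails import PIIDetectionGuardrail, PromptInjectionGuardrail
from agno.models.openai import OpenAIChat

# Each guardrail in pre_hooks executes before the Agent processes input
agent = Agent(
    name="Hardened Agent",
    model=OpenAIChat(id="gpt-5-mini"),
    pre_hooks=[PromptInjectionGuardrail(), PIIDetectionGuardrail()],
)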

Custom Guardrails

You can create custom guardrails by extending the BaseGuardrail class; see the BaseGuardrail Reference for more details. This is useful when you need a check or transformation not handled by the built-in guardrails, or when you simply want to implement your own validation logic. Implement the check and async_check methods to perform your validation, and raise an exception when undesired content is detected.
Agno automatically uses the sync or async version of the guardrail based on whether you are running the agent with .run() or .arun().
For example, let’s create a simple custom guardrail that checks if the input contains any URLs:
import re

from agno.exceptions import CheckTrigger, InputCheckError
from agno.guardrails import BaseGuardrail
from agno.run.agent import RunInput


class URLGuardrail(BaseGuardrail):
    """Guardrail to identify and stop inputs containing URLs."""

    # Basic URL pattern: http(s) links, www-prefixed hosts, and bare domains
    URL_PATTERN = r"https?://[^\s]+|www\.[^\s]+|[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}[^\s]*"

    def check(self, run_input: RunInput) -> None:
        """Raise InputCheckError if the input contains any URLs."""
        if isinstance(run_input.input_content, str):
            if re.search(self.URL_PATTERN, run_input.input_content):
                raise InputCheckError(
                    "The input seems to contain URLs, which are not allowed.",
                    check_trigger=CheckTrigger.INPUT_NOT_ALLOWED,
                )

    async def async_check(self, run_input: RunInput) -> None:
        """Async variant: the check is CPU-only, so delegate to the sync version."""
        self.check(run_input)
Now you can use your custom guardrail in your Agent:
from agno.agent import Agent
from agno.models.openai import OpenAIChat

# Agent using our URLGuardrail
agent = Agent(
    name="URL-Protected Agent",
    model=OpenAIChat(id="gpt-5-mini"),
    # Provide the Guardrails to be used with the pre_hooks parameter
    pre_hooks=[URLGuardrail()],
)

# This will raise an InputCheckError
agent.run("Can you check what's in https://fake.com?")
