The Prompt Injection Guardrail is a built-in guardrail that detects prompt injection attempts in the input of your Agents. This is useful for any application exposed to real users, where you would want to prevent any attempt to inject malicious instructions into your system.

Basic Usage

To provide your Agent with the Prompt Injection Guardrail, you need to import it and pass it to the Agent using the pre_hooks parameter:
from agno.guardrails import PromptInjectionGuardrail
from agno.agent import Agent
from agno.models.openai import OpenAIChat

prompt_injection_guardrail = PromptInjectionGuardrail()

agent = Agent(
    name="Prompt Injection Guardrail Agent",
    model=OpenAIChat(id="gpt-5-mini"),
    pre_hooks=[prompt_injection_guardrail],
)

Injection patterns

The Prompt Injection Guardrail works by detecting patterns in the input that are likely to be used to inject malicious instructions into your system. The default list of injection patterns handled by the guardrail are:
  • “ignore previous instructions”
  • “ignore your instructions”
  • “you are now a”
  • “forget everything above”
  • “developer mode”
  • “override safety”
  • “disregard guidelines”
  • “system prompt”
  • “jailbreak”
  • “act as if”
  • “pretend you are”
  • “roleplay as”
  • “simulate being”
  • “bypass restrictions”
  • “ignore safeguards”
  • “admin override”
  • “root access”
You can override the default list of injection patterns by providing your own own custom list of injection patterns:
prompt_injection_guardrail = PromptInjectionGuardrail(
    injection_patterns=["ignore previous instructions", "ignore your instructions"],
)

Developer Resources