This example demonstrates how to run specific hooks as background tasks using the @hook decorator, while other hooks run synchronously.
1. Create a Python file

touch background_hooks_decorator.py
2. Add the following code to your Python file

background_hooks_decorator.py
import asyncio

from agno.agent import Agent
from agno.db.sqlite import SqliteDb
from agno.hooks import hook
from agno.models.openai import OpenAIChat
from agno.os import AgentOS
from agno.run.agent import RunInput


@hook(run_in_background=True)
def log_request(run_input: RunInput, agent):
    """
    This pre-hook runs in the background.
    Note: Pre-hooks in background mode cannot modify run_input.
    """
    print(f"[Background Pre-Hook] Request received for agent: {agent.name}")
    print(f"[Background Pre-Hook] Input: {run_input.input_content}")


async def log_analytics(run_output, agent, session):
    """
    This post-hook runs synchronously (no decorator).
    It will block the response until complete.
    """
    print(f"[Sync Post-Hook] Logging analytics for run: {run_output.run_id}")
    print(f"[Sync Post-Hook] Agent: {agent.name}")
    print(f"[Sync Post-Hook] Session: {session.session_id}")
    print("[Sync Post-Hook] Analytics logged successfully!")


@hook(run_in_background=True)
async def send_notification(run_output, agent):
    """
    This post-hook runs in the background (has decorator).
    It won't block the API response.
    """
    print(f"[Background Post-Hook] Sending notification for agent: {agent.name}")
    await asyncio.sleep(3)
    print("[Background Post-Hook] Notification sent!")


# Create an agent with mixed hooks
agent = Agent(
    id="background-task-agent",
    name="BackgroundTaskAgent",
    model=OpenAIChat(id="gpt-4o-mini"),
    instructions="You are a helpful assistant",
    db=SqliteDb(db_file="tmp/agent.db"),
    pre_hooks=[log_request],  # Runs in background
    post_hooks=[log_analytics, send_notification],  # log_analytics is sync, send_notification is background
    markdown=True,
)

# Create AgentOS (run_hooks_in_background is False by default)
agent_os = AgentOS(
    agents=[agent],
)

# Get the FastAPI app
app = agent_os.get_app()

if __name__ == "__main__":
    agent_os.serve(app="background_hooks_decorator:app", port=7777, reload=True)
3. Create a virtual environment

Open the terminal and create a Python virtual environment.
python3 -m venv .venv
source .venv/bin/activate
4. Install libraries

pip install -U agno openai uvicorn
5. Export your OpenAI API key

export OPENAI_API_KEY="your_openai_api_key_here"
6. Run the server

python background_hooks_decorator.py
7. Test the endpoint

curl -X POST http://localhost:7777/agents/background-task-agent/runs \
  -F "message=Hello, how are you?" \
  -F "stream=false"
The response is returned only after `log_analytics` completes. Check the server logs to see `log_request` and `send_notification` executing in the background.

What Happens

  1. log_request is scheduled in the background as soon as the request arrives and never blocks the run
  2. The agent processes the request
  3. log_analytics runs synchronously and blocks the response
  4. The response is sent to the user
  5. send_notification continues running in the background
  6. The user only waits for the agent run and log_analytics to complete
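
The ordering above can be sketched with plain asyncio. The hook names mirror the example, but the scheduling below is a simplified illustration of the behavior, not Agno's internals:

```python
import asyncio

events = []  # observe the order in which things happen

async def log_analytics():
    events.append("log_analytics done")  # sync post-hook: awaited

async def send_notification():
    await asyncio.sleep(0.05)
    events.append("send_notification done")  # background post-hook

async def handle_run():
    events.append("agent run complete")
    await log_analytics()                             # blocks the response
    task = asyncio.create_task(send_notification())   # scheduled, not awaited
    events.append("response sent")
    await task  # only so this demo can observe the background hook finishing

asyncio.run(handle_run())
print(events)
# ['agent run complete', 'log_analytics done', 'response sent', 'send_notification done']
```

Note that "response sent" appears before "send_notification done": the background hook finishes after the caller already has the response.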

Comparison: Global vs Per-Hook

| Approach | Use Case |
| --- | --- |
| `AgentOS(run_hooks_in_background=True)` | All hooks are non-critical; maximize response speed |
| `@hook(run_in_background=True)` | Mix of critical (sync) and non-critical (background) hooks |
Use the @hook decorator when you have hooks that must complete before the response (e.g., output validation) alongside hooks that can run later (e.g., notifications).
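
Conceptually, a per-hook flag like this works by tagging each function and letting the dispatcher decide whether to await it or schedule it as a task. The sketch below illustrates that pattern in plain asyncio; it is not Agno's actual implementation:

```python
import asyncio

def hook(run_in_background=False):
    """Tag a hook so the dispatcher knows how to schedule it (illustrative only)."""
    def wrapper(fn):
        fn._run_in_background = run_in_background
        return fn
    return wrapper

async def critical_check():
    # No decorator: awaited before the response goes out
    return "checked"

@hook(run_in_background=True)
async def notify():
    # Decorated: scheduled as a task, never blocks the response
    await asyncio.sleep(0)
    return "notified"

async def dispatch(hooks):
    results = []
    for h in hooks:
        if getattr(h, "_run_in_background", False):
            asyncio.create_task(h())   # fire-and-forget
        else:
            results.append(await h())  # blocking: part of the response path
    return results

async def main():
    results = await dispatch([critical_check, notify])
    await asyncio.sleep(0.01)  # give background tasks a chance to finish (demo only)
    return results

out = asyncio.run(main())
print(out)  # ['checked']
```

Only the undecorated hook contributes to the blocking path; the decorated one runs off to the side.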