# Integrating Agno with Traceloop

Traceloop provides an LLM observability platform built on OpenLLMetry, an open-source OpenTelemetry extension. By integrating Agno with Traceloop, you can automatically trace agent execution, team workflows, tool calls, and token-usage metrics.
## Prerequisites

1. **Install dependencies**

   Ensure you have the necessary packages installed:

   ```shell
   uv pip install agno openai traceloop-sdk
   ```

2. **Set up a Traceloop account**

   Create a Traceloop account and obtain an API key.

3. **Set environment variables**

   Configure your environment with the Traceloop API key:

   ```shell
   export TRACELOOP_API_KEY=<your-api-key>
   ```
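Since the SDK reads `TRACELOOP_API_KEY` from the environment, it can help to verify the variable is set before initializing. The following is a minimal, stdlib-only sketch; `require_traceloop_key` is a hypothetical helper, not part of the Traceloop SDK:

```python
import os


def require_traceloop_key() -> str:
    """Return the Traceloop API key, failing fast if it is unset.

    Hypothetical convenience guard; the Traceloop SDK itself reads
    TRACELOOP_API_KEY from the environment during initialization.
    """
    key = os.environ.get("TRACELOOP_API_KEY", "").strip()
    if not key:
        raise RuntimeError(
            "TRACELOOP_API_KEY is not set - export it before initializing Traceloop"
        )
    return key
```

Calling this at startup turns a missing key into an immediate, explicit error rather than traces that silently fail to appear in the dashboard.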
## Sending Traces to Traceloop
### Example: Basic Agent Instrumentation

Initialize Traceloop at the start of your application. The SDK automatically instruments Agno agent execution.

```python
from traceloop.sdk import Traceloop

from agno.agent import Agent
from agno.models.openai import OpenAIResponses

# Initialize Traceloop - must be called before creating agents
Traceloop.init(app_name="agno_agent")

# Create and configure the agent
agent = Agent(
    name="Assistant",
    model=OpenAIResponses(id="gpt-5.2"),
    description="A helpful assistant",
    instructions=["Be concise and helpful"],
)

# Agent execution is automatically traced
response = agent.run("What is the capital of France?")
print(response.content)
```
### Example: Development Mode (Disable Batching)

For local development, disable batching so traces appear immediately:

```python
from traceloop.sdk import Traceloop

from agno.agent import Agent
from agno.models.openai import OpenAIResponses

# Disable batching for immediate trace visibility during development
Traceloop.init(app_name="agno_dev", disable_batch=True)

# Create and configure the agent
agent = Agent(
    name="DevAgent",
    model=OpenAIResponses(id="gpt-5.2"),
)

agent.print_response("Hello, world!")
```
### Example: Multi-Agent Team Tracing

Team execution is automatically traced, showing the coordination between multiple agents:

```python
from traceloop.sdk import Traceloop

from agno.agent import Agent
from agno.models.openai import OpenAIResponses
from agno.team import Team

Traceloop.init(app_name="agno_team")

researcher = Agent(
    name="Researcher",
    role="Research Specialist",
    model=OpenAIResponses(id="gpt-5.2"),
    instructions=["Research topics thoroughly and provide factual information"],
    debug_mode=True,
)

writer = Agent(
    name="Writer",
    role="Content Writer",
    model=OpenAIResponses(id="gpt-5.2"),
    instructions=["Write clear, engaging content based on research"],
    debug_mode=True,
)

team = Team(
    name="ContentTeam",
    members=[researcher, writer],
    model=OpenAIResponses(id="gpt-5.2"),
    debug_mode=True,
)

# Team execution creates a parent span with child spans for each agent
result = team.run("Write a brief overview of OpenTelemetry observability")
print(result.content)
```
### Example: Using Workflow Decorators

Use the `@workflow` decorator to create custom spans for organizing your traces:

```python
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow

from agno.agent import Agent
from agno.models.openai import OpenAIResponses

Traceloop.init(app_name="agno_workflows")

agent = Agent(
    name="AnalysisAgent",
    model=OpenAIResponses(id="gpt-5.2"),
    debug_mode=True,
)


@workflow(name="data_analysis_pipeline")
def analyze_data(query: str) -> str:
    """Custom workflow that wraps agent execution."""
    response = agent.run(query)
    return response.content


# The workflow decorator creates a parent span
result = analyze_data("Analyze the benefits of observability in AI systems")
print(result)
```
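To build intuition for what the decorator does, here is a simplified, stdlib-only stand-in for `@workflow`. This is only an illustration of the wrapping pattern (a named "span" around the function call); Traceloop's real decorator emits OpenTelemetry spans rather than printing:

```python
import functools
import time


def workflow(name: str):
    """Toy stand-in for traceloop.sdk.decorators.workflow:
    wraps the function in a named "span" and reports its duration."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                # The wrapped call is the "child" work inside the span
                return fn(*args, **kwargs)
            finally:
                elapsed = time.perf_counter() - start
                print(f"span={name!r} duration={elapsed:.4f}s")
        return wrapper
    return decorator


@workflow(name="data_analysis_pipeline")
def analyze(query: str) -> str:
    return f"analysis of: {query}"
```

Any agent calls made inside the decorated function would appear as children of this parent span in the real SDK.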
### Example: Async Execution with Tool Calls

Async agent execution is fully supported, with automatic tool call tracing:

```python
import asyncio

from traceloop.sdk import Traceloop

from agno.agent import Agent
from agno.models.openai import OpenAIResponses

Traceloop.init(app_name="agno_async")


def get_weather(city: str) -> str:
    """Get the weather for a city."""
    return f"The weather in {city} is sunny, 72°F"


agent = Agent(
    name="WeatherAgent",
    model=OpenAIResponses(id="gpt-5.2"),
    tools=[get_weather],
    debug_mode=True,
)


async def main():
    # Async execution is automatically traced
    response = await agent.arun("What's the weather in San Francisco?")
    print(response.content)


asyncio.run(main())
```
## Notes

- **Initialization**: Call `Traceloop.init()` before creating any agents to ensure proper instrumentation.
- **Development Mode**: Use `disable_batch=True` during development for immediate trace visibility.
- **Async Support**: Both sync (`run()`) and async (`arun()`) methods are fully instrumented.
- **Privacy Control**: Set `TRACELOOP_TRACE_CONTENT=false` to disable logging of prompts and completions.
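The initialization and development-mode notes above can be folded into one small configuration helper. This is a hypothetical sketch, not an Agno or Traceloop convention; the `env` parameter and naming scheme are assumptions:

```python
def traceloop_init_kwargs(env: str) -> dict:
    """Build keyword arguments for Traceloop.init() by environment.

    Hypothetical helper: outside production, batching is disabled so
    traces show up immediately (per the development-mode note above).
    """
    kwargs = {"app_name": f"agno_{env}"}
    if env != "production":
        kwargs["disable_batch"] = True
    return kwargs


# Usage: Traceloop.init(**traceloop_init_kwargs("dev"))
```

This keeps the "init before creating agents" call site identical across environments while centralizing the batching decision.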