When you run a workflow in Agno, the response you receive (WorkflowRunOutput) includes detailed metrics about the workflow execution. These metrics help you understand token usage, execution time, and step-level details across all agents, teams, and custom functions in your workflow. Metrics are available at multiple levels:
  • Per workflow: Each WorkflowRunOutput includes a metrics object containing the overall workflow duration and a dictionary of per-step metrics.
  • Per step: Each step has its own metrics including duration, token usage, and model information.
  • Per session: Session metrics aggregate all step-level metrics across all runs in the session.
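Schematically, the three access points look like this (a minimal sketch assuming a Workflow instance like the one in the full, runnable example below):

response = workflow.run(input="...")       # returns a WorkflowRunOutput
response.metrics                           # workflow-level: duration + steps
response.metrics.steps["<step name>"]      # step-level: metrics for one step
workflow.get_session_metrics()             # session-level: aggregated across runs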

Example Usage

Here’s how you can access and use workflow metrics:
from agno.agent import Agent
from agno.db.sqlite import SqliteDb
from agno.models.openai import OpenAIChat
from agno.team import Team
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.tools.hackernews import HackerNewsTools
from agno.workflow import Step, Workflow
from rich.pretty import pprint

# Define agents
hackernews_agent = Agent(
    name="Hackernews Agent",
    model=OpenAIChat(id="gpt-4o-mini"),
    tools=[HackerNewsTools()],
    role="Extract key insights from Hackernews posts",
)

web_agent = Agent(
    name="Web Agent",
    model=OpenAIChat(id="gpt-4o-mini"),
    tools=[DuckDuckGoTools()],
    role="Search the web for latest trends",
)

# Define research team
research_team = Team(
    name="Research Team",
    members=[hackernews_agent, web_agent],
    instructions="Research tech topics from Hackernews and the web",
)

content_planner = Agent(
    name="Content Planner",
    model=OpenAIChat(id="gpt-4o"),
    instructions="Plan a content schedule based on research",
)

# Create workflow
workflow = Workflow(
    name="Content Creation Workflow",
    db=SqliteDb(db_file="tmp/workflow.db"),
    steps=[
        Step(name="Research Step", team=research_team),
        Step(name="Content Planning Step", agent=content_planner),
    ],
)

# Run workflow
response = workflow.run(input="AI trends in 2024")

# Print workflow-level metrics
print("Workflow Metrics")
if response.metrics:
    pprint(response.metrics.to_dict())

# Print workflow duration
if response.metrics and response.metrics.duration:
    print(f"\nTotal execution time: {response.metrics.duration:.2f} seconds")

# Print step-level metrics
print("Step Metrics")
if response.metrics:
    for step_name, step_metrics in response.metrics.steps.items():
        print(f"\nStep: {step_name}")
        print(f"Executor: {step_metrics.executor_name} ({step_metrics.executor_type})")
        if step_metrics.metrics:
            print(f"Duration: {step_metrics.metrics.duration:.2f}s")
            print(f"Tokens: {step_metrics.metrics.total_tokens}")

# Print session metrics
print("Session Metrics")
pprint(workflow.get_session_metrics().to_dict())
You’ll see output containing the following information:
Workflow-level metrics:
  • duration: Total workflow execution time in seconds (from start to finish, including orchestration overhead)
  • steps: Dictionary mapping step names to their individual step metrics
Step-level metrics:
  • step_name: Name of the step
  • executor_type: Type of executor (“agent”, “team”, or “function”)
  • executor_name: Name of the executor
  • metrics: Execution metrics including tokens, duration, and model information (see Metrics schema)
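For example, you can branch on the executor type to report how each step was run. This is a sketch using only the step-level fields listed above and the response object from the example:

# Sketch: report which steps were executed by a team vs. an agent
if response.metrics:
    for step_name, step_metrics in response.metrics.steps.items():
        if step_metrics.executor_type == "team":
            print(f"{step_name} ran via team '{step_metrics.executor_name}'")
        else:
            print(f"{step_name} ran via {step_metrics.executor_type} '{step_metrics.executor_name}'")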
Session metrics:
  • Aggregates step-level metrics (tokens, duration) across all runs in the session
  • Includes only agent/team execution time, not workflow orchestration overhead
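To see how these levels relate, you can sum the step-level durations and compare them to the workflow duration; the gap is orchestration overhead. A sketch using only the fields documented above and the response object from the example:

# Sketch: tokens and time spent inside steps vs. the whole workflow
if response.metrics:
    step_tokens = 0
    step_seconds = 0.0
    for step_metrics in response.metrics.steps.values():
        if step_metrics.metrics:
            step_tokens += step_metrics.metrics.total_tokens or 0
            step_seconds += step_metrics.metrics.duration or 0.0
    print(f"Tokens across steps: {step_tokens}")
    if response.metrics.duration:
        # Workflow duration includes orchestration overhead; step durations do not
        print(f"Orchestration overhead: {response.metrics.duration - step_seconds:.2f}s")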
