Run your team with Team.run() (sync) or Team.arun() (async).

Basic Execution

from agno.team import Team
from agno.agent import Agent
from agno.models.openai import OpenAIResponses
from agno.tools.hackernews import HackerNewsTools
from agno.tools.yfinance import YFinanceTools
from agno.utils.pprint import pprint_run_response

news_agent = Agent(name="News Agent", role="Get tech news", tools=[HackerNewsTools()])
finance_agent = Agent(name="Finance Agent", role="Get stock data", tools=[YFinanceTools()])

team = Team(
    name="Research Team",
    members=[news_agent, finance_agent],
    model=OpenAIResponses(id="gpt-4o")
)

# Run and get response
response = team.run("What are the trending AI stories?")
print(response.content)

# Run with streaming
stream = team.run("What are the trending AI stories?", stream=True)
for chunk in stream:
    print(chunk.content, end="", flush=True)
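
The pprint_run_response helper imported above can pretty-print a run result during development; a minimal usage sketch with the response from the first run:
pprint_run_response(response)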

Execution Flow

When you call run():
  1. Pre-hooks execute (if configured)
  2. Reasoning runs (if enabled) to plan the task
  3. Context is built with system message, history, memories, and session state
  4. Model decides whether to respond directly, use tools, or delegate to members
  5. Members execute their tasks (concurrently in async mode)
  6. Leader synthesizes member results into a final response
  7. Post-hooks execute (if configured)
  8. Session and metrics are stored (if database configured)
(Diagram: Team execution flow)

Streaming

Enable streaming with stream=True. This returns an iterator of events instead of a single response.
stream = team.run("What are the top AI stories?", stream=True)
for chunk in stream:
    print(chunk.content, end="", flush=True)
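
Each chunk carries a piece of the response text, so the stream can also be accumulated into a single string; a minimal sketch:
full_text = ""
for chunk in team.run("What are the top AI stories?", stream=True):
    full_text += chunk.content or ""
print(full_text)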

Stream All Events

By default, only content is streamed. Set stream_events=True to get tool calls, reasoning steps, and other internal events:
from agno.run.team import TeamRunEvent  # import path may differ across agno versions

stream = team.run(
    "What are the trending AI stories?",
    stream=True,
    stream_events=True
)

for event in stream:
    if event.event == TeamRunEvent.run_content:
        print(event.content, end="", flush=True)
    elif event.event == TeamRunEvent.tool_call_started:
        print("Tool call started")
    elif event.event == TeamRunEvent.tool_call_completed:
        print("Tool call completed")

Stream Member Events

When you use arun() with multiple members, the members execute concurrently, so member events arrive as they happen rather than in a fixed order. To disable member event streaming, set stream_member_events=False:
team = Team(
    name="Research Team",
    members=[news_agent, finance_agent],
    model=OpenAIResponses(id="gpt-4o"),
    stream_member_events=False
)

Run Output

Team.run() returns a TeamRunOutput object containing:
content: The final response text
messages: All messages sent to the model
metrics: Token usage, execution time, etc.
member_responses: Responses from delegated members
See TeamRunOutput reference for the full schema.
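
A short sketch of reading these fields off the return value (member responses are assumed to expose content the same way the team response does):
response = team.run("What are the trending AI stories?")

print(response.content)   # final response text
print(response.metrics)   # token usage, execution time, etc.

# responses from the members the leader delegated to
for member_response in response.member_responses:
    print(member_response.content)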

Async Execution

Use arun() for async execution. Members run concurrently when the leader delegates to multiple members at once.
import asyncio

async def main():
    response = await team.arun("Research AI trends and stock performance")
    print(response.content)

asyncio.run(main())
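
Because arun() returns an awaitable, independent runs can also be awaited concurrently with plain asyncio.gather; a sketch using only standard asyncio:
import asyncio

async def main():
    # two independent team runs awaited concurrently
    news, stocks = await asyncio.gather(
        team.arun("Summarize today's AI news"),
        team.arun("Summarize recent AI stock performance"),
    )
    print(news.content)
    print(stocks.content)

asyncio.run(main())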

Specifying User and Session

Associate runs with a user and session for history tracking:
team.run(
    "Get my monthly report",
    user_id="[email protected]",
    session_id="session_123"
)
See Sessions for details.
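
A follow-up call with the same user_id and session_id continues the same conversation, so the team can use the earlier exchange as history:
# follow-up in the same session; the earlier run is available as history
team.run(
    "Compare it to last month's report",
    user_id="[email protected]",
    session_id="session_123"
)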

Passing Files

Pass images, audio, video, or files to the team:
from agno.media import Image

team.run(
    "Analyze this image",
    images=[Image(url="https://example.com/image.jpg")]
)
See Multimodal for details.
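
Local files can usually be passed as well; a sketch assuming Image also accepts a filepath argument (the Multimodal reference lists the exact fields):
from agno.media import Image

# assumption: Image accepts a local filepath in addition to url
team.run(
    "Analyze this image",
    images=[Image(filepath="charts/revenue.png")]
)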

Structured Output

Pass an output schema to get structured responses:
from pydantic import BaseModel

class Report(BaseModel):
    overview: str
    findings: list[str]

response = team.run("Analyze the market", output_schema=Report)
See Input & Output for details.
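
When an output schema is set, response.content is expected to hold the parsed model rather than plain text; a short sketch assuming that behavior:
report = response.content  # expected to be a Report instance when output_schema is set
print(report.overview)
for finding in report.findings:
    print("-", finding)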

Cancelling Runs

Cancel a running team with Team.cancel_run(). See Run Cancellation.

Printing Responses

For development, use print_response() to display formatted output:
team.print_response("What are the top AI stories?", stream=True)

# Show member responses too
team.print_response("What are the top AI stories?", show_members_responses=True)

Events

The following events can be received when streaming a team run.

Core Events

TeamRunStarted: Run started
TeamRunContent: Response text chunk
TeamRunContentCompleted: Content streaming complete
TeamRunCompleted: Run completed successfully
TeamRunError: Error occurred
TeamRunCancelled: Run was cancelled

Tool Events

TeamToolCallStarted: Tool call started
TeamToolCallCompleted: Tool call completed

Reasoning Events

TeamReasoningStarted: Reasoning started
TeamReasoningStep: Single reasoning step
TeamReasoningCompleted: Reasoning completed

Memory Events

TeamMemoryUpdateStarted: Memory update started
TeamMemoryUpdateCompleted: Memory update completed

Hook Events

TeamPreHookStarted: Pre-hook started
TeamPreHookCompleted: Pre-hook completed
TeamPostHookStarted: Post-hook started
TeamPostHookCompleted: Post-hook completed

Developer Resources