The Team.run() function runs the team and generates a response, either as a TeamRunOutput object or a stream of TeamRunOutputEvent objects.
Many of our examples use team.print_response(), a helper utility that prints the response in the terminal. It uses team.run() under the hood.

Running your Team

Here’s how to run your team. The response is captured in the result variable.
from agno.agent import Agent
from agno.team import Team
from agno.models.openai import OpenAIChat

agent_1 = Agent(name="News Agent", role="Get the latest news")

agent_2 = Agent(name="Weather Agent", role="Get the weather for the next 7 days")

team = Team(name="News and Weather Team", members=[agent_1, agent_2])

# Synchronous execution
result = team.run("What is the weather in Tokyo?")

# Asynchronous execution
result = await team.arun("What is the weather in Tokyo?")
You can also run the team asynchronously using the Team.arun() method.
For development purposes, you can also print the response in the terminal using the Team.print_response() method.
team.print_response("What is the weather in Tokyo?")

# Or for streaming
team.print_response("What is the weather in Tokyo?", stream=True)
The Team.print_response() method is a helper method that uses the Team.run() method under the hood. This is only for convenience during development and not recommended for production use. See the Team class reference for more details.

Typed inputs and outputs

A team can be provided with typed input (i.e. a Pydantic model) by passing it to Team.run() or Team.print_response() as the input parameter.
structured_input_team.py
from typing import List

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.team.team import Team
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.tools.hackernews import HackerNewsTools
from pydantic import BaseModel, Field


class ResearchProject(BaseModel):
    """Structured research project with validation requirements."""

    project_name: str = Field(description="Name of the research project")
    research_topics: List[str] = Field(
        description="List of topics to research", min_items=1
    )
    target_audience: str = Field(description="Intended audience for the research")
    depth_level: str = Field(
        description="Research depth level", pattern="^(basic|intermediate|advanced)$"
    )
    max_sources: int = Field(
        description="Maximum number of sources to use", ge=3, le=20, default=10
    )
    include_recent_only: bool = Field(
        description="Whether to focus only on recent sources", default=True
    )


# Create research agents
hackernews_agent = Agent(
    name="HackerNews Researcher",
    model=OpenAIChat(id="gpt-5-mini"),
    tools=[HackerNewsTools()],
    role="Research trending topics and discussions on HackerNews",
    instructions=[
        "Search for relevant discussions and articles",
        "Focus on high-quality posts with good engagement",
        "Extract key insights and technical details",
    ],
)

web_researcher = Agent(
    name="Web Researcher",
    model=OpenAIChat(id="gpt-5-mini"),
    tools=[DuckDuckGoTools()],
    role="Conduct comprehensive web research",
    instructions=[
        "Search for authoritative sources and documentation",
        "Find recent articles and blog posts",
        "Gather diverse perspectives on the topics",
    ],
)

# Create the research team
research_team = Team(
    name="Research Team with Input Validation",
    model=OpenAIChat(id="gpt-5-mini"),
    members=[hackernews_agent, web_researcher],
    instructions=[
        "Conduct thorough research based on the validated input",
        "Coordinate between team members to avoid duplicate work",
        "Ensure research depth matches the specified level",
        "Respect the maximum sources limit",
        "Focus on recent sources if requested",
    ],
)

research_request = ResearchProject(
    project_name="Blockchain Development Tools",
    research_topics=["Ethereum", "Solana", "Web3 Libraries"],
    target_audience="Blockchain Developers",
    depth_level="advanced",
    max_sources=12,
    include_recent_only=False,
)

research_team.print_response(input=research_request)
You can set the input_schema on the team to validate the input. See more details in the Input and Output documentation.
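For example, a minimal sketch (reusing the agents and the ResearchProject model defined above, and assuming a plain dict passed as input is validated against the schema) of setting input_schema on the team:

# A sketch: with input_schema set, the input is validated against ResearchProject before the run starts
validated_team = Team(
    name="Research Team with Input Validation",
    model=OpenAIChat(id="gpt-5-mini"),
    members=[hackernews_agent, web_researcher],
    input_schema=ResearchProject,
)

# A dict that matches the schema passes validation; one that violates it
# (e.g. depth_level="expert") should fail before any model call is made.
validated_team.print_response(
    input={
        "project_name": "Blockchain Development Tools",
        "research_topics": ["Ethereum", "Solana"],
        "target_audience": "Blockchain Developers",
        "depth_level": "advanced",
    }
)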
In addition, you can set the output_schema on the team to specify typed output.
structured_output_team.py
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.team.team import Team
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.tools.hackernews import HackerNewsTools
from pydantic import BaseModel, Field


# Create research agents
hackernews_agent = Agent(
    name="HackerNews Researcher",
    model=OpenAIChat(id="gpt-5-mini"),
    tools=[HackerNewsTools()],
    role="Research trending topics and discussions on HackerNews",
    instructions=[
        "Search for relevant discussions and articles",
        "Focus on high-quality posts with good engagement",
        "Extract key insights and technical details",
    ],
)

web_researcher = Agent(
    name="Web Researcher",
    model=OpenAIChat(id="gpt-5-mini"),
    tools=[DuckDuckGoTools()],
    role="Conduct comprehensive web research",
    instructions=[
        "Search for authoritative sources and documentation",
        "Find recent articles and blog posts",
        "Gather diverse perspectives on the topics",
    ],
)


class ResearchReport(BaseModel):
    """Structured research project with validation requirements."""

    title: str = Field(description="Title of the research report")
    subtitle: str = Field(description="Subtitle of the research report")
    content: str = Field(description="Content of the research report")

# Create team with output_schema for structured output
research_team = Team(
    name="Research Team",
    model=OpenAIChat(id="gpt-5-mini"),
    members=[hackernews_agent, web_researcher],
    output_schema=ResearchReport,
    instructions=[
        "Conduct thorough research based on the validated input",
        "Coordinate between team members to avoid duplicate work",
        "Ensure research depth matches the specified level",
        "Respect the maximum sources limit",
        "Focus on recent sources if requested",
    ],
)

research_team.print_response("Latest happening in the world of AI")
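When output_schema is set, the parsed model is available as the run output’s content (see content and content_type below). A minimal sketch of consuming it via Team.run(), reusing the team above:

response = research_team.run("Latest happening in the world of AI")

report = response.content  # a ResearchReport instance when output_schema is set
print(report.title)
print(report.subtitle)
print(report.content)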
See more details in the Input and Output documentation.

RunOutput

The Team.run() function returns a TeamRunOutput object when not streaming. Here are some of the core attributes:
  • run_id: The id of the run.
  • team_id: The id of the team.
  • team_name: The name of the team.
  • session_id: The id of the session.
  • user_id: The id of the user.
  • content: The response content.
  • content_type: The type of content. In the case of structured output, this will be the class name of the pydantic model.
  • reasoning_content: The reasoning content.
  • messages: The list of messages sent to the model.
  • metrics: The metrics of the run. For more details see Metrics.
  • model: The model used for the run.
  • member_responses: The list of member responses. Included when store_member_responses=True is set on the Team.
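A minimal sketch of reading a few of these attributes after a non-streaming run, reusing the team from the first example:

run_output = team.run("What is the weather in Tokyo?")

print(run_output.run_id)
print(run_output.content)       # the response content
print(run_output.content_type)  # class name of the pydantic model when using structured output
print(run_output.metrics)       # run metrics (see Metrics)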
See detailed documentation in the TeamRunOutput documentation.

Streaming Responses

To enable streaming, set stream=True when calling run(). This will return an iterator of TeamRunOutputEvent objects instead of a single response.
from agno.agent import Agent
from agno.team import Team
from agno.models.openai import OpenAIChat

agent_1 = Agent(name="News Agent", role="Get the latest news")

agent_2 = Agent(name="Weather Agent", role="Get the weather for the next 7 days")

team = Team(name="News and Weather Team", members=[agent_1, agent_2])

# Synchronous execution
for chunk in team.run("What is the weather in Tokyo?", stream=True, stream_intermediate_steps=True):
    print(chunk.content, end="", flush=True)

# Asynchronous execution
async for chunk in team.arun("What is the weather in Tokyo?", stream=True, stream_intermediate_steps=True):
    print(chunk.content, end="", flush=True)

Streaming Intermediate Steps

Throughout the execution of a team, multiple events take place, and we provide these events in real-time for enhanced team transparency. You can enable streaming of intermediate steps by setting stream_intermediate_steps=True.
# Stream with intermediate steps
response_stream = team.run(
    "What is the weather in Tokyo?",
    stream=True,
    stream_intermediate_steps=True
)

Handling Events

You can process events as they arrive by iterating over the response stream:
response_stream = team.run("Your prompt", stream=True, stream_intermediate_steps=True)

for event in response_stream:
    if event.event == "TeamRunContent":
        print(f"Content: {event.content}")
    elif event.event == "TeamToolCallStarted":
        print(f"Tool call started: {event.tool}")
    elif event.event == "ToolCallStarted":
        print(f"Member tool call started: {event.tool}")
    elif event.event == "ToolCallCompleted":
        print(f"Member tool call completed: {event.tool}")
    elif event.event == "TeamReasoningStep":
        print(f"Reasoning step: {event.content}")
    ...
Team member events are yielded during team execution when a team member is being executed. You can disable this by setting stream_member_events=False.
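For example, a minimal sketch that keeps only team-level events in the stream (assuming stream_member_events is passed to the Team constructor, reusing the agents from the streaming example above):

team = Team(
    name="News and Weather Team",
    members=[agent_1, agent_2],
    stream_member_events=False,  # member agent events are not yielded in the team’s stream
)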

Storing Events

You can store all the events that happened during a run on the TeamRunOutput object.
from agno.team import Team
from agno.models.openai import OpenAIChat
from agno.utils.pprint import pprint_run_response

team = Team(model=OpenAIChat(id="gpt-5-mini"), members=[], store_events=True)

response = team.run("Tell me a 5 second short story about a lion", stream=True, stream_intermediate_steps=True)
pprint_run_response(response)

for event in response.events:
    print(event.event)
By default the TeamRunContentEvent and RunContentEvent events are not stored. You can modify which events are skipped by setting the events_to_skip parameter. For example:
from agno.run.team import TeamRunEvent

team = Team(model=OpenAIChat(id="gpt-5-mini"), members=[], store_events=True, events_to_skip=[TeamRunEvent.run_started.value])

Event Types

The following events are sent by the Team.run() and Team.arun() functions depending on the team’s configuration:

Core Events

  • TeamRunStarted: Indicates the start of a run
  • TeamRunContent: Contains the model’s response text as individual chunks
  • TeamRunCompleted: Signals successful completion of the run
  • TeamRunError: Indicates an error occurred during the run
  • TeamRunCancelled: Signals that the run was cancelled

Tool Events

  • TeamToolCallStarted: Indicates the start of a tool call
  • TeamToolCallCompleted: Signals completion of a tool call, including tool call results

Reasoning Events

  • TeamReasoningStarted: Indicates the start of the team’s reasoning process
  • TeamReasoningStep: Contains a single step in the reasoning process
  • TeamReasoningCompleted: Signals completion of the reasoning process

Memory Events

  • TeamMemoryUpdateStarted: Indicates that the team is updating its memory
  • TeamMemoryUpdateCompleted: Signals completion of a memory update

Parser Model Events

  • TeamParserModelResponseStarted: Indicates the start of the parser model response
  • TeamParserModelResponseCompleted: Signals completion of the parser model response

Output Model Events

  • TeamOutputModelResponseStarted: Indicates the start of the output model response
  • TeamOutputModelResponseCompleted: Signals completion of the output model response
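Instead of comparing raw event strings, you can match against the TeamRunEvent enum used in the storing-events example above. A sketch, assuming TeamRunEvent lives in agno.run.team and its member values match the event names listed here:

from agno.run.team import TeamRunEvent

for event in team.run("What is the weather in Tokyo?", stream=True, stream_intermediate_steps=True):
    if event.event == TeamRunEvent.run_started.value:
        print("Run started")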
See detailed documentation in the TeamRunOutput documentation.

Custom Events

If you are using your own custom tools, it is often useful to yield custom events. Your custom events will be yielded together with the rest of the expected Agno events. We recommend creating your custom event class by extending the built-in CustomEvent class:
from dataclasses import dataclass
from typing import Optional

from agno.run.team import CustomEvent

@dataclass
class CustomerProfileEvent(CustomEvent):
    """CustomEvent for customer profile."""

    customer_name: Optional[str] = None
    customer_email: Optional[str] = None
    customer_phone: Optional[str] = None
You can then yield your custom event from your tool. The event will be handled internally as an Agno event, and you will be able to access it in the same way you would access any other Agno event.
from agno.tools import tool

@tool()
async def get_customer_profile():
    """Example custom tool that simply yields a custom event."""

    yield CustomerProfileEvent(
        customer_name="John Doe",
        customer_email="john.doe@example.com",
        customer_phone="1234567890",
    )
See the full example for more details.

Interactive CLI

You can also interact with the team via a CLI.
team.cli_app(input="What is the weather in Tokyo?", stream=True)
See the Team class reference for more details.

Developer Resources