Agno Teams support several forms of input and output. The simplest and most common pattern is str input and str output.
from agno.models.openai import OpenAIChat
from agno.team import Team

team = Team(
    members=[],
    model=OpenAIChat(id="gpt-5-mini"),
    description="You write movie scripts.",
)

response = team.run("Write a movie script about a girl living in New York")
print(response.content)

Structured Output

Teams can be used to generate structured data (i.e., a Pydantic model). This is generally called “Structured Output”. Use this feature to extract features, classify data, produce synthetic data, etc. The best part is that it works with function calls, knowledge bases and all other features. Let’s create a Stock Research Team that produces a StockReport for us.
1. Structured Output example

structured_output_team.py
from pydantic import BaseModel
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.team import Team
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.utils.pprint import pprint_run_response


class StockAnalysis(BaseModel):
    symbol: str
    company_name: str
    analysis: str

class CompanyAnalysis(BaseModel):
    company_name: str
    analysis: str

stock_searcher = Agent(
    name="Stock Searcher",
    model=OpenAIChat("gpt-5-mini"),
    output_schema=StockAnalysis,
    role="Searches for information on stocks and provides price analysis.",
    tools=[
        DuckDuckGoTools()
    ],
)

company_info_agent = Agent(
    name="Company Info Searcher",
    model=OpenAIChat("gpt-5-mini"),
    role="Searches for information about companies and recent news.",
    output_schema=CompanyAnalysis,
    tools=[
        DuckDuckGoTools()
    ],
)

class StockReport(BaseModel):
    symbol: str
    company_name: str
    analysis: str

team = Team(
    name="Stock Research Team",
    model=OpenAIChat("gpt-5-mini"),
    members=[stock_searcher, company_info_agent],
    output_schema=StockReport,
    markdown=True,
)

# This should route to the stock_searcher
response = team.run("What is the current stock price of NVDA?")
assert isinstance(response.content, StockReport)
pprint_run_response(response)
2. Run the example

Install libraries
pip install openai agno ddgs
Export your key
export OPENAI_API_KEY=xxx
Run the example
python structured_output_team.py
The output is an object of the StockReport class. Here’s how it looks:
StockReport(
    symbol='NVDA',
    company_name='NVIDIA Corp',
    analysis='NVIDIA Corp (NVDA) remains a leading player in the AI chip market, ...'
)
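Since response.content is a regular Pydantic object, you can use it like any other Python value. A minimal sketch (model_dump assumes Pydantic v2):
report = response.content
print(report.symbol)        # e.g. "NVDA"
print(report.model_dump())  # plain dict, e.g. for serialization or storage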
Some LLMs are not able to generate structured output. Agno has an option to tell the model to respond as JSON. Although this is typically not as accurate as structured output, it can be useful in some cases. If you want to use JSON mode, you can set use_json_mode=True on the Team.
team = Team(
  model=OpenAIChat(id="gpt-5-mini"),
  members=[stock_searcher, company_info_agent],
  description="You write stock reports.",
  output_schema=StockReport,
  use_json_mode=True,
)

Streaming Structured Output

Streaming can be used in combination with output_schema. This returns the structured output as a single RunContent event in the stream of events.
1. Streaming Structured Output example

streaming_structured_output_team.py
from typing import Dict, List

from agno.models.openai import OpenAIChat
from agno.team import Team
from pydantic import BaseModel, Field


class MovieScript(BaseModel):
    setting: str = Field(
        ..., description="Provide a nice setting for a blockbuster movie."
    )
    ending: str = Field(
        ...,
        description="Ending of the movie. If not available, provide a happy ending.",
    )
    genre: str = Field(
        ...,
        description="Genre of the movie. If not available, select action, thriller or romantic comedy.",
    )
    name: str = Field(..., description="Give a name to this movie")
    characters: List[str] = Field(..., description="Name of characters for this movie.")
    storyline: str = Field(
        ..., description="3 sentence storyline for the movie. Make it exciting!"
    )
    rating: Dict[str, int] = Field(
        ...,
        description="Your own rating of the movie. 1-10. Return a dictionary with the keys 'story' and 'acting'.",
    )


# Team that uses structured outputs with streaming
structured_output_team = Team(
    members=[],
    model=OpenAIChat(id="gpt-5-mini"),
    description="You write movie scripts.",
    output_schema=MovieScript,
)

structured_output_team.print_response(
    "New York", stream=True, stream_intermediate_steps=True
)
2. Run the example

Install libraries
pip install openai agno
Export your key
export OPENAI_API_KEY=xxx
Run the example
python streaming_structured_output_team.py
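If you prefer to consume the stream yourself rather than using print_response, you can iterate the events returned by run(..., stream=True). A minimal sketch, assuming the structured result arrives in a single content event carrying the parsed MovieScript (event attribute names may differ slightly between Agno versions):
for event in structured_output_team.run("New York", stream=True):
    # Most events carry intermediate content; the structured result is the
    # event whose content is the parsed MovieScript.
    content = getattr(event, "content", None)
    if isinstance(content, MovieScript):
        print(content.model_dump_json(indent=2))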

Structured Input

A team can be provided with structured input (i.e., a Pydantic model) by passing it to Team.run() or Team.print_response() as the input parameter.
1. Structured Input example

structured_input_team.py
from typing import List

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.team.team import Team
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.tools.hackernews import HackerNewsTools
from pydantic import BaseModel, Field


class ResearchProject(BaseModel):
    """Structured research project with validation requirements."""

    project_name: str = Field(description="Name of the research project")
    research_topics: List[str] = Field(
        description="List of topics to research", min_items=1
    )
    target_audience: str = Field(description="Intended audience for the research")
    depth_level: str = Field(
        description="Research depth level", pattern="^(basic|intermediate|advanced)$"
    )
    max_sources: int = Field(
        description="Maximum number of sources to use", ge=3, le=20, default=10
    )
    include_recent_only: bool = Field(
        description="Whether to focus only on recent sources", default=True
    )


# Create research agents
hackernews_agent = Agent(
    name="HackerNews Researcher",
    model=OpenAIChat(id="gpt-5-mini"),
    tools=[HackerNewsTools()],
    role="Research trending topics and discussions on HackerNews",
    instructions=[
        "Search for relevant discussions and articles",
        "Focus on high-quality posts with good engagement",
        "Extract key insights and technical details",
    ],
)

web_researcher = Agent(
    name="Web Researcher",
    model=OpenAIChat(id="gpt-5-mini"),
    tools=[DuckDuckGoTools()],
    role="Conduct comprehensive web research",
    instructions=[
        "Search for authoritative sources and documentation",
        "Find recent articles and blog posts",
        "Gather diverse perspectives on the topics",
    ],
)

# Create the research team that receives the structured input
research_team = Team(
    name="Research Team with Input Validation",
    model=OpenAIChat(id="gpt-5-mini"),
    members=[hackernews_agent, web_researcher],
    instructions=[
        "Conduct thorough research based on the validated input",
        "Coordinate between team members to avoid duplicate work",
        "Ensure research depth matches the specified level",
        "Respect the maximum sources limit",
        "Focus on recent sources if requested",
    ],
)

research_request = ResearchProject(
    project_name="Blockchain Development Tools",
    research_topics=["Ethereum", "Solana", "Web3 Libraries"],
    target_audience="Blockchain Developers",
    depth_level="advanced",
    max_sources=12,
    include_recent_only=False,
)

research_team.print_response(input=research_request)
2. Run the example

Install libraries
pip install openai agno ddgs
Export your key
export OPENAI_API_KEY=xxx
Run the example
python structured_input_team.py

Validating the input

You can set input_schema on the Team to validate the input. If you then pass the input as a dictionary, it will be automatically validated against the schema.
1. Validating the input example

validating_input_team.py
from typing import List

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.team.team import Team
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.tools.hackernews import HackerNewsTools
from pydantic import BaseModel, Field


class ResearchProject(BaseModel):
    """Structured research project with validation requirements."""

    project_name: str = Field(description="Name of the research project")
    research_topics: List[str] = Field(
        description="List of topics to research", min_items=1
    )
    target_audience: str = Field(description="Intended audience for the research")
    depth_level: str = Field(
        description="Research depth level", pattern="^(basic|intermediate|advanced)$"
    )
    max_sources: int = Field(
        description="Maximum number of sources to use", ge=3, le=20, default=10
    )
    include_recent_only: bool = Field(
        description="Whether to focus only on recent sources", default=True
    )


# Create research agents
hackernews_agent = Agent(
    name="HackerNews Researcher",
    model=OpenAIChat(id="gpt-5-mini"),
    tools=[HackerNewsTools()],
    role="Research trending topics and discussions on HackerNews",
    instructions=[
        "Search for relevant discussions and articles",
        "Focus on high-quality posts with good engagement",
        "Extract key insights and technical details",
    ],
)

web_researcher = Agent(
    name="Web Researcher",
    model=OpenAIChat(id="gpt-5-mini"),
    tools=[DuckDuckGoTools()],
    role="Conduct comprehensive web research",
    instructions=[
        "Search for authoritative sources and documentation",
        "Find recent articles and blog posts",
        "Gather diverse perspectives on the topics",
    ],
)

# Create team with input_schema for automatic validation
research_team = Team(
    name="Research Team with Input Validation",
    model=OpenAIChat(id="gpt-5-mini"),
    members=[hackernews_agent, web_researcher],
    input_schema=ResearchProject,
    instructions=[
        "Conduct thorough research based on the validated input",
        "Coordinate between team members to avoid duplicate work",
        "Ensure research depth matches the specified level",
        "Respect the maximum sources limit",
        "Focus on recent sources if requested",
    ],
)

research_team.print_response(
    input={
        "project_name": "AI Framework Comparison 2024",
        "research_topics": ["LangChain", "CrewAI", "AutoGen", "Agno"],
        "target_audience": "AI Engineers and Developers",
        "depth_level": "intermediate",
        "max_sources": 15,
        "include_recent_only": True,
    }
)
2. Run the example

Install libraries
pip install openai agno ddgs
Export your key
export OPENAI_API_KEY=xxx
Run the example
python validating_input_team.py
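If the dictionary does not satisfy the schema, validation fails before the team runs. A minimal sketch of handling that case (the exact exception type may vary by Agno version; Pydantic's ValidationError is the most likely candidate):
from pydantic import ValidationError

try:
    research_team.print_response(
        input={
            "project_name": "Bad Request",
            "research_topics": [],      # violates min_items=1
            "target_audience": "Developers",
            "depth_level": "expert",    # not one of basic|intermediate|advanced
        }
    )
except (ValidationError, ValueError) as exc:
    print(f"Input rejected: {exc}")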

Using a Parser Model

You can use a different model to parse and structure the output from your primary model. This approach is particularly effective when the primary model is optimized for reasoning tasks, as such models may not consistently produce detailed structured responses.
team = Team(
    model=Claude(id="claude-sonnet-4-20250514"),  # The main processing model
    members=[...],
    description="You write movie scripts.",
    output_schema=MovieScript,
    parser_model=OpenAIChat(id="gpt-5-mini"),  # Only used to parse the output
)
You can also provide a custom parser_model_prompt to your Parser Model to customize the model’s instructions.
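For example, a sketch where the prompt text is only illustrative and should be adapted to your schema:
team = Team(
    model=Claude(id="claude-sonnet-4-20250514"),  # The main processing model
    members=[...],
    output_schema=MovieScript,
    parser_model=OpenAIChat(id="gpt-5-mini"),
    parser_model_prompt="Extract the MovieScript fields from the response above without adding new information.",
)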

Using an Output Model

You can use a different model to produce the run output of the team. This is useful when the primary model is optimized for image analysis, for example, but you want a different model to produce a structured output response.
team = Team(
    model=Gemini(id="gemini-2.0-flash-001"),  # The main processing model
    description="You write movie scripts.",
    output_schema=MovieScript,
    output_model=OpenAIChat(id="gpt-5-mini"),  # Only used to produce the final structured output
    members=[...],
)
You can also provide a custom output_model_prompt to your Output Model to customize the model’s instructions.
Gemini models often reject requests to use tools and produce structured output at the same time. Using an Output Model is an effective workaround for this.

Developer Resources