Workflows support multiple input types for maximum flexibility:
| Input Type | Example | Use Case |
| --- | --- | --- |
| String | `"Analyze AI trends"` | Simple text prompts |
| Pydantic Model | `ResearchRequest(topic="AI", depth=5)` | Type-safe structured input |
| List | `["AI", "ML", "LLMs"]` | Multiple items to process |
| Dictionary | `{"query": "AI", "sources": ["web", "academic"]}` | Key-value pairs |
Before this input is passed to an `Agent` or `Team`, it is serialized to a string.
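For example, a minimal sketch of passing a plain string, list, or dictionary directly (assuming a `workflow` defined as in the examples below, and the same `message` parameter those examples use):
```python
# Any of the input types above can be passed directly; non-string inputs are
# serialized to a string before reaching the agent or team.
workflow.print_response(message="Analyze AI trends")                  # string
workflow.print_response(message=["AI", "ML", "LLMs"])                 # list
workflow.print_response(message={"query": "AI", "sources": ["web", "academic"]})  # dict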
See more on Pydantic as input in the Advanced Workflows documentation.
Leverage Pydantic models for type-safe, validated workflow inputs:
```python
from typing import List

from pydantic import BaseModel, Field


class ResearchRequest(BaseModel):
    topic: str = Field(description="Research topic")
    depth: int = Field(description="Research depth (1-10)")
    sources: List[str] = Field(description="Preferred sources")


workflow.print_response(
    message=ResearchRequest(
        topic="AI trends 2024",
        depth=8,
        sources=["academic", "industry"],
    )
)
```
You can set `input_schema` on the `Workflow` to validate the input. If you then pass the input as a dictionary, it is automatically validated against the schema.
```python
from typing import List

from pydantic import BaseModel, Field

from agno.db.sqlite import SqliteDb
from agno.workflow import Workflow


class ResearchTopic(BaseModel):
    """Structured research topic with specific requirements"""

    topic: str
    focus_areas: List[str] = Field(description="Specific areas to focus on")
    target_audience: str = Field(description="Who this research is for")
    sources_required: int = Field(description="Number of sources needed", default=5)


# research_step and content_planning_step are assumed to be defined earlier
workflow = Workflow(
    name="Content Creation Workflow",
    description="Automated content creation from blog posts to social media",
    db=SqliteDb(
        session_table="workflow_session",
        db_file="tmp/workflow.db",
    ),
    steps=[research_step, content_planning_step],
    input_schema=ResearchTopic,
)

workflow.print_response(
    input={
        "topic": "AI trends in 2024",
        "focus_areas": ["Machine Learning", "Computer Vision"],
        "target_audience": "Tech professionals",
        "sources_required": 8,
    },
    markdown=True,
)
```
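If the dictionary does not match the schema, validation fails. A rough sketch (the exact error type and how it surfaces may vary by Agno version):
```python
# Hypothetical sketch: `target_audience` is required by ResearchTopic, so
# omitting it should cause input_schema validation to fail.
try:
    workflow.print_response(
        input={"topic": "AI trends in 2024", "focus_areas": ["Machine Learning"]},
        markdown=True,
    )
except Exception as exc:  # the exact exception type depends on the Agno version
    print(f"Input rejected by input_schema: {exc}")
```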
Workflows feature a powerful type-safe data flow system enabling each step to:
- Receive structured input (Pydantic models, lists, dicts, or raw strings)
- Produce structured output (validated Pydantic models)
- Maintain type safety throughout the entire workflow execution
Data Flow Between Steps
Input Processing
- First step receives the workflow’s input message
- Subsequent steps receive the previous step’s structured output
Output Generation
- Each Agent processes input using its configured `output_schema`
- Output is automatically validated against the defined model
```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.workflow import Step, Workflow

# ResearchFindings and AnalysisResults are Pydantic output models assumed to
# be defined elsewhere.

# Define agents with response models
research_agent = Agent(
    name="Research Specialist",
    model=OpenAIChat(id="gpt-4"),
    output_schema=ResearchFindings,  # <-- Set on Agent
)

analysis_agent = Agent(
    name="Analysis Expert",
    model=OpenAIChat(id="gpt-4"),
    output_schema=AnalysisResults,  # <-- Set on Agent
)

# Steps reference these agents
workflow = Workflow(steps=[
    Step(agent=research_agent),   # Will output ResearchFindings
    Step(agent=analysis_agent),   # Will output AnalysisResults
])
```
Custom functions can access structured output from previous steps via `step_input.previous_step_content`, preserving the original Pydantic model types.
Transformation Pattern
- Type-Check Inputs: Use `isinstance(step_input.previous_step_content, ModelName)` to verify input structure
- Modify Data: Extract fields, process them, and construct new Pydantic models
- Return Typed Output: Wrap the new model in `StepOutput(content=new_model)` for type safety
Example Implementation
```python
from agno.workflow import StepInput, StepOutput


def transform_data(step_input: StepInput) -> StepOutput:
    research = step_input.previous_step_content  # Type: ResearchFindings
    # Type-check the structured input before transforming it
    if not isinstance(research, ResearchFindings):
        return StepOutput(content="No research findings to transform")
    analysis = AnalysisReport(
        analysis_type="Custom",
        key_findings=[f"Processed: {research.topic}"],
        # ... additional AnalysisReport fields
    )
    return StepOutput(content=analysis)
```
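The function can then be wired into a workflow alongside agent-backed steps. A minimal sketch, assuming custom functions are attached to a `Step` via an `executor` parameter (the parameter name here is an assumption based on recent Agno versions; adjust to yours):
```python
# Sketch only: research_agent and transform_data are defined above; the
# executor parameter name is assumed, not taken from this page.
workflow = Workflow(
    name="Research and Transform",
    steps=[
        Step(name="Research", agent=research_agent),      # outputs ResearchFindings
        Step(name="Transform", executor=transform_data),  # outputs AnalysisReport
    ],
)
```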
Workflows seamlessly handle media artifacts (images, videos, audio) throughout the execution pipeline, enabling rich multimedia processing workflows.
Media Flow System
- Input Support: Media can be provided to `Workflow.run()` and `Workflow.print_response()`
- Step Propagation: Media is passed through to individual steps (Agents, Teams, or Custom Functions)
- Artifact Accumulation: Each step receives shared media from previous steps and can produce additional outputs
- Format Compatibility: Automatic conversion between artifact formats ensures seamless integration
- Complete Preservation: The final `WorkflowRunOutput` contains all accumulated media from the entire execution chain
Here’s an example of how to pass an image as input:
```python
from agno.agent import Agent
from agno.db.sqlite import SqliteDb
from agno.media import Image
from agno.models.openai import OpenAIChat
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.workflow import Step, Workflow

# Define agents
image_analyzer = Agent(
    name="Image Analyzer",
    model=OpenAIChat(id="gpt-5-mini"),
    instructions="Analyze the provided image and extract key details, objects, and context.",
)

news_researcher = Agent(
    name="News Researcher",
    model=OpenAIChat(id="gpt-5-mini"),
    tools=[DuckDuckGoTools()],
    instructions="Search for latest news and information related to the analyzed image content.",
)

# Define steps
analysis_step = Step(
    name="Image Analysis Step",
    agent=image_analyzer,
)

research_step = Step(
    name="News Research Step",
    agent=news_researcher,
)

# Create workflow with media input
media_workflow = Workflow(
    name="Image Analysis and Research Workflow",
    description="Analyze an image and research related news",
    steps=[analysis_step, research_step],
    db=SqliteDb(db_file="tmp/workflow.db"),
)

# Run workflow with image input
if __name__ == "__main__":
    media_workflow.print_response(
        message="Please analyze this image and find related news",
        images=[
            Image(url="https://upload.wikimedia.org/wikipedia/commons/0/0c/GoldenGateBridge-001.jpg")
        ],
        markdown=True,
    )
```
If you are using `Workflow.run()`, use `WorkflowRunOutput` to access the images, videos, and audio.
```python
from agno.run.workflow import WorkflowRunOutput

response: WorkflowRunOutput = media_workflow.run(
    message="Please analyze this image and find related news",
    images=[
        Image(url="https://upload.wikimedia.org/wikipedia/commons/0/0c/GoldenGateBridge-001.jpg")
    ],
    markdown=True,
)

print(response.images)
```
Similarly, you can pass `Video` and `Audio` as input.
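As a rough sketch, assuming `Video` and `Audio` accept a URL the same way `Image` does and that `print_response` exposes matching `videos` and `audio` parameters (the media URLs below are hypothetical placeholders):
```python
from agno.media import Audio, Video

# Hypothetical URLs; replace with real media sources.
media_workflow.print_response(
    message="Summarize this clip and find related coverage",
    videos=[Video(url="https://example.com/clip.mp4")],
    audio=[Audio(url="https://example.com/interview.mp3")],
    markdown=True,
)
```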