The Agent.run() function runs the agent and generates a response, either as a RunResponse object or a stream of RunResponse objects.

Many of our examples use agent.print_response(), a helper utility that prints the response in the terminal. It uses agent.run() under the hood.

Here’s how to run your agent. The response is captured in the response and response_stream variables.

from typing import Iterator
from agno.agent import Agent, RunResponse
from agno.models.openai import OpenAIChat
from agno.utils.pprint import pprint_run_response

agent = Agent(model=OpenAIChat(id="gpt-4o-mini"))

# Run agent and return the response as a variable
response: RunResponse = agent.run("Tell me a 5 second short story about a robot")
# Run agent and return the response as a stream
response_stream: Iterator[RunResponse] = agent.run("Tell me a 5 second short story about a lion", stream=True)

# Print the response in markdown format
pprint_run_response(response, markdown=True)
# Print the response stream in markdown format
pprint_run_response(response_stream, markdown=True)
Set stream=True to return a stream of RunResponse objects.
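If you prefer to consume the stream yourself rather than passing it to pprint_run_response, you can iterate over the chunks directly and accumulate their content. A minimal sketch of that pattern, using a stand-in dataclass in place of the real RunResponse (the real class comes from agno.agent and a real stream from agent.run(..., stream=True)):

```python
from dataclasses import dataclass
from typing import Iterator

# Stand-in for agno's RunResponse; the real class carries many more attributes.
@dataclass
class RunResponse:
    content: str

def fake_stream() -> Iterator[RunResponse]:
    # In real code this would be: agent.run("Tell me a story", stream=True)
    for word in ["Once", " upon", " a", " time"]:
        yield RunResponse(content=word)

# Accumulate the streamed chunks into the full response text
full_text = "".join(chunk.content for chunk in fake_stream())
print(full_text)  # Once upon a time
```

The same accumulation loop works on the real stream, since each streamed chunk exposes its text via the content attribute.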

RunResponse

The Agent.run() function returns either a RunResponse object or an Iterator[RunResponse] when stream=True. It has the following attributes:

Understanding Metrics

For a detailed explanation of how metrics are collected and used, please refer to the Metrics Documentation.

RunResponse Attributes

| Attribute | Type | Default | Description |
| --- | --- | --- | --- |
| content | Any | None | Content of the response. |
| content_type | str | "str" | Specifies the data type of the content. |
| context | List[MessageContext] | None | The context added to the response for RAG. |
| event | str | RunEvent.run_response.value | Event type of the response. |
| event_data | Dict[str, Any] | None | Data associated with the event. |
| messages | List[Message] | None | A list of messages included in the response. |
| metrics | Dict[str, Any] | None | Usage metrics of the run. |
| model | str | None | The model used in the run. |
| run_id | str | None | Run Id. |
| agent_id | str | None | Agent Id for the run. |
| session_id | str | None | Session Id for the run. |
| tools | List[Dict[str, Any]] | None | List of tools provided to the model. |
| images | List[Image] | None | List of images the model produced. |
| videos | List[Video] | None | List of videos the model produced. |
| audio | List[Audio] | None | List of audio snippets the model produced. |
| response_audio | ModelResponseAudio | None | The model's raw response in audio. |
| created_at | int | - | Unix timestamp of the response creation. |
| extra_data | RunResponseExtraData | None | Extra data containing optional fields like references, add_messages, history, reasoning_steps, and reasoning_messages. |
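After a non-streaming run, you read these attributes straight off the returned object. A hedged sketch of typical inspection, using a minimal stand-in dataclass that mirrors a few of the field names from the table above (the real RunResponse comes from agno.agent, and the exact shape of metrics is an assumption here):

```python
from dataclasses import dataclass
from typing import Any, Dict, Optional

# Minimal stand-in mirroring a few RunResponse fields from the table above
@dataclass
class RunResponse:
    content: Any = None
    content_type: str = "str"
    metrics: Optional[Dict[str, Any]] = None
    run_id: Optional[str] = None
    model: Optional[str] = None

# In real code this would be: response = agent.run("...")
response = RunResponse(
    content="A robot learned to dream.",
    metrics={"input_tokens": [12], "output_tokens": [7]},  # hypothetical shape
    run_id="run_123",
    model="gpt-4o-mini",
)

# Typical inspection after a run
print(response.content)
print(response.model, response.run_id)
total_out = sum(response.metrics["output_tokens"])
print(total_out)
```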

Streaming Intermediate Steps

Throughout the execution of an agent, multiple events take place, and we provide these events in real-time for enhanced agent transparency.

You can enable streaming of intermediate steps by setting stream_intermediate_steps=True.

# Stream with intermediate steps
response_stream = agent.run(
    "Tell me a 5 second short story about a lion",
    stream=True,
    stream_intermediate_steps=True
)
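Each chunk in the resulting stream carries an event attribute you can branch on, so you can surface intermediate steps while still collecting the response text. A sketch of that consumption pattern, with a hypothetical event sequence standing in for the real stream (event names follow the Event Types table below):

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in for agno's RunResponse stream chunks
@dataclass
class RunResponse:
    event: str
    content: Optional[str] = None

# Hypothetical sequence standing in for:
# agent.run("...", stream=True, stream_intermediate_steps=True)
events = [
    RunResponse(event="RunStarted"),
    RunResponse(event="ToolCallStarted"),
    RunResponse(event="ToolCallCompleted"),
    RunResponse(event="RunResponse", content="The lion napped."),
    RunResponse(event="RunCompleted"),
]

content_parts = []
for chunk in events:
    if chunk.event == "RunResponse":
        content_parts.append(chunk.content)  # a chunk of response text
    else:
        print(f"[{chunk.event}]")  # an intermediate step event

print("".join(content_parts))
```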

Event Types

The following events are sent by the Agent.run() and Agent.arun() functions depending on the agent's configuration:

| Event Type | Description |
| --- | --- |
| RunStarted | Indicates the start of a run |
| RunResponse | Contains the model's response text as individual chunks |
| RunCompleted | Signals successful completion of the run |
| RunError | Indicates an error occurred during the run |
| RunCancelled | Signals that the run was cancelled |
| ToolCallStarted | Indicates the start of a tool call |
| ToolCallCompleted | Signals completion of a tool call. This also contains the tool call results. |
| ReasoningStarted | Indicates the start of the agent's reasoning process |
| ReasoningStep | Contains a single step in the reasoning process |
| ReasoningCompleted | Signals completion of the reasoning process |
| UpdatingMemory | Indicates that the agent is updating its memory |
| WorkflowStarted | Indicates the start of a workflow |
| WorkflowCompleted | Signals completion of a workflow |

You can see this behavior in action in our Playground.