Documentation Index
Fetch the complete documentation index at: https://docs.agno.com/llms.txt
Use this file to discover all available pages before exploring further.
Examples of reasoning models include:
- OpenAI o1-pro and gpt-5-mini
- Claude 3.7 Sonnet in extended-thinking mode
- Gemini 2.0 Flash Thinking
- DeepSeek-R1
Reasoning models deeply consider and think through a plan before taking action. It's all about what the model does before it starts generating a response. Reasoning models excel at single-shot use cases: they're perfect for solving hard problems (coding, math, physics) that don't require multiple turns or sequential tool calls.
Examples
gpt-5-mini
from agno.agent import Agent
from agno.models.openai import OpenAIResponses

# Setup your Agent using a reasoning model
agent = Agent(model=OpenAIResponses(id="gpt-5-mini"))

# Run the Agent
agent.print_response(
    "Solve the trolley problem. Evaluate multiple ethical frameworks. Include an ASCII diagram of your solution.",
    stream=True,
    show_full_reasoning=True,
)
gpt-5-mini with tools
from agno.agent import Agent
from agno.models.openai import OpenAIResponses
from agno.tools.hackernews import HackerNewsTools

# Setup your Agent using a reasoning model
agent = Agent(
    model=OpenAIResponses(id="gpt-5-mini"),
    tools=[HackerNewsTools()],
    markdown=True,
)

# Run the Agent
agent.print_response("What is the best basketball team in the NBA this year?", stream=True)
gpt-5-mini with reasoning effort
gpt_5_mini_with_reasoning_effort.py
from agno.agent import Agent
from agno.models.openai import OpenAIResponses
from agno.tools.hackernews import HackerNewsTools

# Setup your Agent using a reasoning model with high reasoning effort
agent = Agent(
    model=OpenAIResponses(id="gpt-5-mini", reasoning_effort="high"),
    tools=[HackerNewsTools()],
    markdown=True,
)

# Run the Agent
agent.print_response("What is the best basketball team in the NBA this year?", stream=True)
DeepSeek-R1 using Groq
deepseek_r1_using_groq.py
from agno.agent import Agent
from agno.models.groq import Groq

# Setup your Agent using a reasoning model
agent = Agent(
    model=Groq(
        id="deepseek-r1-distill-llama-70b", temperature=0.6, max_tokens=1024, top_p=0.95
    ),
    markdown=True,
)

# Run the Agent
agent.print_response("9.11 and 9.9 -- which is bigger?", stream=True)
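As an aside, the prompt above is a classic trap: digit-wise reasoning suggests 9.11 is bigger because 11 > 9, but 9.9 (that is, 9.90) is the larger number. A one-line check confirms the answer the model should reach:

```python
# 9.9 equals 9.90, which is greater than 9.11; the digits after the
# decimal point do not compare like whole numbers (11 > 9 misleads).
print(9.9 > 9.11)  # True
```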
Reasoning Model + Response Model
When you run the DeepSeek-R1 Agent above, you'll notice that the response is not that great. DeepSeek-R1 excels at solving problems but is weaker at responding in a natural, conversational way (unlike Claude Sonnet or GPT-4.5).
To solve this problem, Agno supports using separate models for reasoning and response generation. This approach leverages a reasoning model for problem-solving while using a different model optimized for natural language responses, combining the strengths of both.
DeepSeek-R1 + Claude Sonnet
from agno.agent import Agent
from agno.models.anthropic import Claude
from agno.models.groq import Groq

# Setup your Agent using an extra reasoning model
deepseek_plus_claude = Agent(
    model=Claude(id="claude-sonnet-4-5"),
    reasoning_model=Groq(
        id="deepseek-r1-distill-llama-70b", temperature=0.6, max_tokens=1024, top_p=0.95
    ),
)

# Run the Agent
deepseek_plus_claude.print_response("9.11 and 9.9 -- which is bigger?", stream=True)
Streaming Reasoning Content
When using a reasoning_model, you can stream the reasoning content as it’s being generated. This allows you to see the model’s thought process in real-time.
To enable streaming reasoning, set stream=True and stream_events=True when running the agent:
from agno.agent import Agent
from agno.models.anthropic import Claude

# Create an agent with a reasoning model
agent = Agent(
    reasoning_model=Claude(
        id="claude-sonnet-4-5",
        thinking={"type": "enabled", "budget_tokens": 1024},
    ),
    reasoning=True,
    instructions="Think step by step about the problem.",
)

# Stream the response with reasoning events
agent.print_response(
    "What is 25 * 37? Show your reasoning.",
    stream=True,
    stream_events=True,
)
Capturing Reasoning Events
You can also capture individual reasoning events. This gives you fine-grained control over how reasoning content is displayed:
capture_reasoning_events.py
from agno.agent import Agent
from agno.models.anthropic import Claude
from agno.run.agent import RunEvent

agent = Agent(
    reasoning_model=Claude(
        id="claude-sonnet-4-5",
        thinking={"type": "enabled", "budget_tokens": 1024},
    ),
    reasoning=True,
    instructions="Think step by step about the problem.",
)

for run_output_event in agent.run(
    "What is 25 * 37? Show your reasoning.",
    stream=True,
    stream_events=True,
):
    if run_output_event.event == RunEvent.run_started:
        print(f"EVENT: {run_output_event.event}")
    elif run_output_event.event == RunEvent.reasoning_started:
        print(f"EVENT: {run_output_event.event}")
        print("Reasoning started...\n")
    elif run_output_event.event == RunEvent.reasoning_content_delta:
        # Stream reasoning content as it's being generated
        print(run_output_event.reasoning_content, end="", flush=True)
    elif run_output_event.event == RunEvent.run_content:
        if run_output_event.content:
            print(run_output_event.content, end="", flush=True)
    elif run_output_event.event == RunEvent.run_completed:
        print(f"EVENT: {run_output_event.event}")
The key events for streaming reasoning are:
| Event | Description |
|---|---|
| RunEvent.reasoning_started | Emitted when reasoning begins |
| RunEvent.reasoning_content_delta | Emitted for each chunk of reasoning content as it streams |
| RunEvent.run_content | Emitted for the final response content |
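When you handle many event types, the if/elif chain in the loop above can be factored into a handler table. The sketch below is a hypothetical, self-contained illustration of that pattern: plain strings stand in for the RunEvent members, and the event stream is simulated rather than produced by agent.run, so it can run without any model calls.

```python
# Hypothetical handler table; strings stand in for RunEvent members.
reasoning_chunks: list[str] = []
response_chunks: list[str] = []

handlers = {
    "reasoning_started": lambda ev: reasoning_chunks.clear(),  # reset on a new reasoning phase
    "reasoning_content_delta": lambda ev: reasoning_chunks.append(ev["reasoning_content"]),
    "run_content": lambda ev: response_chunks.append(ev["content"]),
}

def dispatch(event: dict) -> None:
    # Events without a handler (e.g. run_started) are simply ignored
    handler = handlers.get(event["event"])
    if handler:
        handler(event)

# Simulated stream, shaped like the events yielded by
# agent.run(..., stream=True, stream_events=True):
for ev in [
    {"event": "run_started"},
    {"event": "reasoning_started"},
    {"event": "reasoning_content_delta", "reasoning_content": "25 * 37 = 925"},
    {"event": "run_content", "content": "The answer is 925."},
    {"event": "run_completed"},
]:
    dispatch(ev)

print("".join(reasoning_chunks))
print("".join(response_chunks))
```

This keeps the per-event logic in one place, and new event types can be supported by adding an entry to the table instead of another elif branch.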
Developer Resources