The problem: Regular models often rush to answers on complex problems, missing steps or making logical errors.
The solution: Set reasoning=True and watch your model break the problem down, explore multiple approaches, validate results, and deliver a thoroughly vetted solution.
The beauty? It works with any model, from GPT-4o to Claude to local models via Ollama. You’re not limited to specialized reasoning models.
How It Works
Enable reasoning on any agent by setting reasoning=True:
```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat

reasoning_agent = Agent(
    model=OpenAIChat(id="gpt-4o"),  # Any model works
    reasoning=True,
)
```
Behind the scenes, Agno creates a separate reasoning agent instance that uses your same model but with specialized prompting that guides it through a rigorous 6-step reasoning framework:
The Reasoning Framework
1. Problem Analysis
   - Restate the task to ensure full comprehension
   - Identify required information and necessary tools
2. Decompose and Strategize
   - Break the problem down into subtasks
   - Develop multiple distinct approaches
3. Intent Clarification and Planning
   - Articulate the user’s intent
   - Select the best strategy with clear justification
   - Create a detailed action plan
4. Execute the Action Plan
   - For each step: document the title, action, result, reasoning, next action, and confidence score
   - Call tools as needed to gather information
   - Self-correct if errors are detected
5. Validation (Mandatory)
   - Cross-verify with alternative approaches
   - Use additional tools to confirm accuracy
   - Reset and revise if validation fails
6. Final Answer
   - Deliver the thoroughly validated solution
   - Explain how it addresses the original task
The reasoning agent works through these steps iteratively (up to 10 by default), building on previous results, calling tools, and self-correcting until it reaches a confident solution. Once complete, it hands the full reasoning back to your main agent for the final response.
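Conceptually, that loop produces one structured record per step and stops when a step declares a final answer or the step budget runs out. Here is a minimal sketch of the control flow — the class and function names are illustrative, not Agno's actual internals:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ReasoningStep:
    """Illustrative record of one reasoning step (field names mirror the
    framework above, not Agno's real types)."""
    title: str
    action: str
    result: str
    reasoning: str
    next_action: str   # e.g. "continue", "validate", or "final_answer"
    confidence: float  # 0.0-1.0

def run_reasoning_loop(
    propose_step: Callable[[List[ReasoningStep]], ReasoningStep],
    min_steps: int = 1,
    max_steps: int = 10,
) -> List[ReasoningStep]:
    """Iterate until a step declares a final answer, within the step budget."""
    steps: List[ReasoningStep] = []
    for _ in range(max_steps):
        step = propose_step(steps)  # each proposal builds on previous results
        steps.append(step)
        if len(steps) >= min_steps and step.next_action == "final_answer":
            break
    return steps
```

The real agent additionally feeds each step's result back into the model's prompt before proposing the next one, which is how it builds on prior work and self-corrects.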
How It Differs by Model Type
With regular models (gpt-4o, Claude Sonnet, Gemini):
- Forces structured chain-of-thought through the 6-step framework
- Creates detailed reasoning steps with confidence scores
- This is where reasoning agents shine: transforming any model into a reasoning system
With native reasoning models (gpt-5-mini, DeepSeek-R1, o3-mini):
- Uses the model’s built-in reasoning capabilities
- Adds a validation pass from your main agent
- Useful for critical tasks but often unnecessary overhead for simpler problems
Basic Example
Let’s transform a regular GPT-4o model into a reasoning system:
```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat

# Transform a regular model into a reasoning system
reasoning_agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    reasoning=True,
    markdown=True,
)

reasoning_agent.print_response(
    "Solve the trolley problem. Evaluate multiple ethical frameworks. Include an ASCII diagram of your solution.",
    stream=True,
    show_full_reasoning=True,  # Shows the complete reasoning process
)
```
What You’ll See
With show_full_reasoning=True, you’ll see:
- Each reasoning step with its title, action, and result
- The agent’s thought process including why it chose each approach
- Tool calls made during reasoning (if tools are provided)
- Validation checks performed to verify the solution
- Confidence scores for each step (0.0–1.0)
- Self-corrections if the agent detects errors
- The final polished response from your main agent
Reasoning with Tools
Here’s where reasoning agents truly excel: combining multi-step reasoning with tool use. The reasoning agent can call tools iteratively, analyze results, and build toward a comprehensive solution.
```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.duckduckgo import DuckDuckGoTools

reasoning_agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    tools=[DuckDuckGoTools()],
    instructions=["Use tables to display data"],
    reasoning=True,
    markdown=True,
)

reasoning_agent.print_response(
    "Compare the market performance of NVDA, AMD, and INTC over the past quarter. What are the key drivers?",
    stream=True,
    show_full_reasoning=True,
)
```
The reasoning agent will:
- Break down the task (need stock data for 3 companies)
- Use DuckDuckGo to search for current market data
- Analyze each company’s performance
- Search for news about key drivers
- Validate findings across multiple sources
- Create a comprehensive comparison with tables
- Provide a final answer with clear insights
Configuration Options
Display Options
Want to peek under the hood? Control what you see during reasoning:
```python
agent.print_response(
    "Your question",
    show_full_reasoning=True,  # Display the complete reasoning process (default: False)
)
```
Capturing Reasoning Events
For building custom UIs or programmatically tracking reasoning progress, you can capture reasoning events (ReasoningStarted, ReasoningStep, ReasoningCompleted) as they happen during streaming. See the Reasoning Reference for event attributes and complete code examples.
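As a sketch of the pattern — this is illustrative plumbing, not Agno's streaming API; check the Reasoning Reference for the real event attributes — you might dispatch on the event name as events arrive:

```python
def track_reasoning(event_stream, on_step=None):
    """Collect reasoning steps from a stream of events.

    `event_stream` can be any iterable of objects exposing an `event`
    attribute set to "ReasoningStarted", "ReasoningStep", or
    "ReasoningCompleted" (names from the docs above; attribute shape
    is an assumption here).
    """
    steps = []
    for ev in event_stream:
        if ev.event == "ReasoningStep":
            steps.append(ev)
            if on_step is not None:
                on_step(ev)  # e.g. push a progress update to a custom UI
        elif ev.event == "ReasoningCompleted":
            break
    return steps
```

In practice the iterable would be the agent's streaming run rather than a hand-built list.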
Iteration Control
Adjust how many reasoning steps the agent takes:
```python
reasoning_agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    reasoning=True,
    reasoning_min_steps=2,   # Minimum reasoning steps (default: 1)
    reasoning_max_steps=15,  # Maximum reasoning steps (default: 10)
)
```
- reasoning_min_steps: ensures the agent thinks through at least this many steps before answering
- reasoning_max_steps: prevents runaway loops by capping the iteration count
Custom Reasoning Agent
For advanced use cases, you can provide your own reasoning agent:
```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat

# Create a custom reasoning agent with specific instructions
custom_reasoning_agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    instructions=[
        "Focus heavily on mathematical rigor",
        "Always provide step-by-step proofs",
    ],
)

main_agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    reasoning=True,
    reasoning_agent=custom_reasoning_agent,  # Use your custom agent
)
```
Example Use Cases
Logical Puzzles
Breaking down complex logic problems:

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat

task = (
    "Three missionaries and three cannibals need to cross a river. "
    "They have a boat that can carry up to two people at a time. "
    "If, at any time, the cannibals outnumber the missionaries on either side of the river, the cannibals will eat the missionaries. "
    "How can all six people get across the river safely? Provide a step-by-step solution and show the solution as an ASCII diagram."
)

reasoning_agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    reasoning=True,
    markdown=True,
)

reasoning_agent.print_response(task, stream=True, show_full_reasoning=True)
```
Mathematical Proofs
Problems requiring rigorous validation:

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat

task = "Prove that for any positive integer n, the sum of the first n odd numbers is equal to n squared. Provide a detailed proof."

reasoning_agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    reasoning=True,
    markdown=True,
)

reasoning_agent.print_response(task, stream=True, show_full_reasoning=True)
```
Scientific Research
Critical evaluation and multi-faceted analysis:

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat

task = (
    "Read the following abstract of a scientific paper and provide a critical evaluation of its methodology, "
    "results, conclusions, and any potential biases or flaws:\n\n"
    "Abstract: This study examines the effect of a new teaching method on student performance in mathematics. "
    "A sample of 30 students was selected from a single school and taught using the new method over one semester. "
    "The results showed a 15% increase in test scores compared to the previous semester. "
    "The study concludes that the new teaching method is effective in improving mathematical performance among high school students."
)

reasoning_agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    reasoning=True,
    markdown=True,
)

reasoning_agent.print_response(task, stream=True, show_full_reasoning=True)
```
Planning & Itineraries
Sequential planning and optimization:

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat

task = "Plan a 3-day itinerary from Los Angeles to Las Vegas, including must-see attractions, dining recommendations, and optimal travel times."

reasoning_agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    reasoning=True,
    markdown=True,
)

reasoning_agent.print_response(task, stream=True, show_full_reasoning=True)
```
Creative Writing
Structured and coherent creative content:

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat

task = "Write a short story about life in 500,000 years. Consider technological, biological, and societal evolution."

reasoning_agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    reasoning=True,
    markdown=True,
)

reasoning_agent.print_response(task, stream=True, show_full_reasoning=True)
```
When to Use Reasoning Agents
Use reasoning agents when:
- Your task requires multiple sequential steps
- You need the agent to call tools iteratively and build on results
- You want automated chain-of-thought without manually calling reasoning tools
- You need self-validation and error correction
- The problem benefits from exploring multiple approaches before settling on a solution
Consider alternatives when:
- You’re using a native reasoning model (gpt-5-mini, DeepSeek-R1) for simple tasks: just use the model directly
- You want explicit control over when the agent thinks vs. acts: use Reasoning Tools instead
- The task is straightforward and doesn’t require multi-step thinking
Pro tip: Start with reasoning_max_steps=5 for simpler problems to avoid
unnecessary overhead. Increase to 10-15 for complex multi-step tasks. Monitor
with show_full_reasoning=True to see how many steps your agent actually
needs.
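The tip above, as a starting configuration:

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat

# Start with a small step budget for simple tasks; raise it for complex ones
reasoning_agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    reasoning=True,
    reasoning_max_steps=5,
)
```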
Developer Resources