Reasoning Agents work through an internal chain of thought before responding: they explore different ideas, validating and correcting course as needed.

ReAct: Reason and Act

At the core of effective reasoning lies the ReAct (Reason and Act) methodology: a paradigm where agents alternate between reasoning about a problem and taking actions (like calling tools) to gather information or execute tasks. This iterative process allows agents to break down complex problems into manageable steps, validate their assumptions through action, and adjust their approach based on real-world feedback.

In Agno, ReAct principles are embedded throughout our reasoning implementations. Whether an agent is using reasoning models to think through a problem, or employing reasoning tools to structure its thought process, it follows the same fundamental pattern of reasoning → acting → observing → reasoning again until reaching a solution (sketched in code after the list below). Agno supports three approaches to reasoning:
  1. Reasoning Models
  2. Reasoning Tools
  3. Reasoning Agents
Which approach works best will depend on your use case; we recommend trying them all and immersing yourself in this new era of Reasoning Agents!
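
To make the pattern concrete, here is a minimal, framework-agnostic sketch of the ReAct loop in plain Python. The llm_reason and run_tool helpers are hypothetical stand-ins for a model call and a tool registry, not Agno APIs; when you use any of the approaches below, Agno runs an equivalent loop for you.

react_loop_sketch.py
def llm_reason(task: str, observations: list[str]) -> tuple[str, str, str]:
    # Hypothetical stand-in for a model call that returns
    # (thought, action, action_input). A real implementation would
    # prompt an LLM with the task and the observations so far.
    if observations:
        return ("I have enough information.", "final_answer", observations[-1])
    return ("I should look this up.", "search", task)


def run_tool(action: str, action_input: str) -> str:
    # Hypothetical stand-in for dispatching to a real tool
    # (web search, calculator, database query, ...).
    return f"result of {action}({action_input!r})"


def react_loop(task: str, max_steps: int = 10) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        # Reason: decide on the next action given what we know so far.
        thought, action, action_input = llm_reason(task, observations)
        if action == "final_answer":
            return action_input  # the model is done reasoning
        # Act, then observe: run the tool and feed the result back
        # into the next round of reasoning.
        observations.append(run_tool(action, action_input))
    return "No answer within the step budget."


print(react_loop("What is the capital of France?"))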

Reasoning Models

Reasoning models are a separate class of large language models pre-trained to think before they answer: they produce an internal chain of thought before generating a response. Examples of reasoning models include the OpenAI o-series, Claude 3.7 Sonnet in extended-thinking mode, Gemini 2.0 Flash Thinking, and DeepSeek-R1. Reasoning at the model layer is all about what the model does before it starts generating the final response. Reasoning models excel at single-shot use cases: they're perfect for solving hard problems (coding, math, physics) that don't require multiple turns or sequential tool calls. You can try any supported Agno model; if that model has reasoning capabilities, it will be used to reason about the problem.

Example

gpt_5_mini.py
from agno.agent import Agent
from agno.models.openai import OpenAIChat

# Set up your Agent using a reasoning model
agent = Agent(model=OpenAIChat(id="gpt-5-mini"))

# Run the Agent
agent.print_response(
    "Solve the trolley problem. Evaluate multiple ethical frameworks. Include an ASCII diagram of your solution.",
    stream=True,
    show_full_reasoning=True,
)
Read more about reasoning models in the Reasoning Models Guide.

Reasoning Model + Response Model

What if we want to use a reasoning model for the reasoning step, but a different model to generate the response? Reasoning models are great at solving problems, but they don't respond as naturally as conversational models like Claude Sonnet or GPT-4o. By using one model for reasoning and another to generate the response, we get the best of both worlds:

Example

Let’s use DeepSeek-R1 from Groq for reasoning and Claude Sonnet for a natural response.
deepseek_plus_claude.py
from agno.agent import Agent
from agno.models.anthropic import Claude
from agno.models.groq import Groq

# Set up your Agent with Claude as the main model and DeepSeek-R1 as the reasoning model
claude_with_deepseek_reasoner = Agent(
    model=Claude(id="claude-3-7-sonnet-20250219"),
    reasoning_model=Groq(
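        # temperature=0.6 and top_p=0.95 match DeepSeek's recommended sampling settings for R1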
        id="deepseek-r1-distill-llama-70b", temperature=0.6, max_tokens=1024, top_p=0.95
    ),
)

# Run the Agent
claude_with_deepseek_reasoner.print_response(
    "9.11 and 9.9 -- which is bigger?",
    stream=True,
    show_full_reasoning=True,
)

Reasoning Tools

Giving a model reasoning tools provides a dedicated space for structured thinking and can greatly improve its reasoning capabilities. This is a simple yet effective way to add reasoning to non-reasoning models. The approach was first published by Anthropic in this blog post, but had been practiced by many AI engineers (including our own team) long before it was published.

Example

claude_reasoning_tools.py
from agno.agent import Agent
from agno.models.anthropic import Claude
from agno.tools.reasoning import ReasoningTools

# Set up our Agent with the reasoning tools
reasoning_agent = Agent(
    model=Claude(id="claude-3-7-sonnet-latest"),
    tools=[
        ReasoningTools(add_instructions=True),
    ],
    instructions="Use tables where possible",
    markdown=True,
)

# Run the Agent
reasoning_agent.print_response(
    "Write a report on NVDA. Only the report, no other text.",
    stream=True,
    show_full_reasoning=True,
    stream_intermediate_steps=True,
)
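Under the hood, ReasoningTools gives the model dedicated think and analyze tools: a scratchpad it can call between other tool calls to plan its approach and check its own work. Setting add_instructions=True adds the toolkit's default usage instructions to the Agent's system message.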
Read more about reasoning tools in the Reasoning Tools Guide.

Reasoning Agents

Reasoning Agents are a new type of multi-agent system developed by Agno that combines chain-of-thought reasoning with tool use. You can enable reasoning on any Agent by setting reasoning=True. When an Agent with reasoning=True is given a task, a separate "Reasoning Agent" first solves the problem using chain-of-thought reasoning. At each step it calls tools to gather information, validate results, and iterate until it reaches a final answer. The Reasoning Agent then hands its final answer back to the original Agent, which validates it and produces the response.

Example

reasoning_agent.py
from agno.agent import Agent
from agno.models.openai import OpenAIChat

# Set up our Agent with reasoning enabled
reasoning_agent = Agent(
    model=OpenAIChat(id="gpt-5-mini"),
    reasoning=True,
    markdown=True,
)

# Run the Agent
reasoning_agent.print_response(
    "Solve the trolley problem. Evaluate multiple ethical frameworks. Include an ASCII diagram of your solution.",
    stream=True,
    show_full_reasoning=True,
)
Read more about reasoning agents in the Reasoning Agents Guide.