Reasoning models are a new class of large language models trained to think before they answer. They produce a long internal chain of thought before responding. Examples of reasoning models include:
  • OpenAI o1-pro and gpt-5-mini
  • Claude 3.7 Sonnet in extended thinking mode
  • Gemini 2.0 Flash Thinking
  • DeepSeek-R1
Reasoning models deeply consider and think through a plan before taking action. It's all about what the model does before it starts generating a response. Reasoning models excel at single-shot use cases: they're perfect for solving hard problems (coding, math, physics) that don't require multiple turns or sequential tool calls.

Examples

gpt-5-mini

gpt_5_mini.py
from agno.agent import Agent
from agno.models.openai import OpenAIChat

# Setup your Agent using a reasoning model
agent = Agent(model=OpenAIChat(id="gpt-5-mini"))

# Run the Agent
agent.print_response(
    "Solve the trolley problem. Evaluate multiple ethical frameworks. Include an ASCII diagram of your solution.",
    stream=True,
    show_full_reasoning=True,
)
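
Here stream=True streams the answer as it is generated, and show_full_reasoning=True tells Agno to include the model's reasoning steps in the printed output rather than only the final answer.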

gpt-5-mini with tools

gpt_5_mini_with_tools.py
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.duckduckgo import DuckDuckGoTools

# Setup your Agent using a reasoning model
agent = Agent(
    model=OpenAIChat(id="gpt-5-mini"),
    tools=[DuckDuckGoTools()],
    markdown=True,
)

# Run the Agent
agent.print_response("What is the best basketball team in the NBA this year?", stream=True)

gpt-5-mini with reasoning effort

gpt_5_mini_with_reasoning_effort.py
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.duckduckgo import DuckDuckGoTools

# Setup your Agent using a reasoning model with high reasoning effort
agent = Agent(
    model=OpenAIChat(id="gpt-5-mini", reasoning_effort="high"),
    tools=[DuckDuckGoTools()],
    markdown=True,
)

# Run the Agent
agent.print_response("What is the best basketball team in the NBA this year?", stream=True)

DeepSeek-R1 using Groq

deepseek_r1_using_groq.py
from agno.agent import Agent
from agno.models.groq import Groq

# Setup your Agent using a reasoning model
agent = Agent(
    model=Groq(
        id="deepseek-r1-distill-llama-70b", temperature=0.6, max_tokens=1024, top_p=0.95
    ),
    markdown=True,
)

# Run the Agent
agent.print_response("9.11 and 9.9 -- which is bigger?", stream=True)

Reasoning Model + Response Model

When you run the DeepSeek-R1 Agent above, you'll notice that the response is not that great. This is because DeepSeek-R1 is excellent at solving problems but not as good at responding in a natural, conversational way (like Claude Sonnet or GPT-4.5). What if we wanted to use a reasoning model to reason but a different model to generate the response? Good news: Agno lets you pair a Reasoning Model with a separate Response Model. By using one model for reasoning and another for responding, we get the best of both worlds.

DeepSeek-R1 + Claude Sonnet

deepseek_plus_claude.py
from agno.agent import Agent
from agno.models.anthropic import Claude
from agno.models.groq import Groq

# Setup your Agent with a separate reasoning model and response model
deepseek_plus_claude = Agent(
    model=Claude(id="claude-3-7-sonnet-20250219"),
    reasoning_model=Groq(
        id="deepseek-r1-distill-llama-70b", temperature=0.6, max_tokens=1024, top_p=0.95
    ),
)

# Run the Agent
deepseek_plus_claude.print_response("9.11 and 9.9 -- which is bigger?", stream=True)
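
Here the Groq-hosted DeepSeek-R1 model produces the chain of thought and Claude Sonnet writes the final answer, so you get R1's problem solving with Sonnet's natural prose. If you want to capture the result programmatically instead of printing it, a minimal sketch (assuming Agno's Agent.run() returns a response object exposing a content attribute) looks like this:

# Capture the combined agent's answer instead of streaming it to the terminal
result = deepseek_plus_claude.run("9.11 and 9.9 -- which is bigger?")
print(result.content)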

Developer Resources