Examples of reasoning models include:
  • OpenAI o1-pro and gpt-5-mini
  • Claude 3.7 Sonnet in extended-thinking mode
  • Gemini 2.0 Flash Thinking
  • DeepSeek-R1
Reasoning models deeply consider and think through a plan before taking action. It's all about what the model does before it starts generating a response. Reasoning models excel at single-shot use cases: they're perfect for hard problems (coding, math, physics) that don't require multiple turns or sequential tool calls.

Examples

gpt-5-mini

gpt_5_mini.py
from agno.agent import Agent
from agno.models.openai import OpenAIChat

# Setup your Agent using a reasoning model
agent = Agent(model=OpenAIChat(id="gpt-5-mini"))

# Run the Agent
agent.print_response(
    "Solve the trolley problem. Evaluate multiple ethical frameworks. Include an ASCII diagram of your solution.",
    stream=True,
    show_full_reasoning=True,
)

gpt-5-mini with tools

gpt_5_mini_with_tools.py
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.duckduckgo import DuckDuckGoTools

# Setup your Agent using a reasoning model
agent = Agent(
    model=OpenAIChat(id="gpt-5-mini"),
    tools=[DuckDuckGoTools()],
    markdown=True,
)

# Run the Agent
agent.print_response("What is the best basketball team in the NBA this year?", stream=True)

gpt-5-mini with reasoning effort

gpt_5_mini_with_reasoning_effort.py
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.duckduckgo import DuckDuckGoTools

# Setup your Agent using a reasoning model with high reasoning effort
agent = Agent(
    model=OpenAIChat(id="gpt-5-mini", reasoning_effort="high"),
    tools=[DuckDuckGoTools()],
    markdown=True,
)

# Run the Agent
agent.print_response("What is the best basketball team in the NBA this year?", stream=True)

DeepSeek-R1 using Groq

deepseek_r1_using_groq.py
from agno.agent import Agent
from agno.models.groq import Groq

# Setup your Agent using a reasoning model
agent = Agent(
    model=Groq(
        id="deepseek-r1-distill-llama-70b", temperature=0.6, max_tokens=1024, top_p=0.95
    ),
    markdown=True,
)

# Run the Agent
agent.print_response("9.11 and 9.9 -- which is bigger?", stream=True)
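This prompt is a classic trap for non-reasoning models: 9.11 looks larger digit-by-digit, but as a number it is smaller than 9.9. For reference, the plain arithmetic the agent should arrive at:

```python
# Compared as numbers (not as version strings), 9.9 means 9.90,
# which is greater than 9.11.
print(9.9 > 9.11)  # True
```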

Reasoning Model + Response Model

When you run the DeepSeek-R1 Agent above, you'll notice that the response is not great. This is because DeepSeek-R1 excels at solving problems but is weaker at responding in a natural way (unlike Claude Sonnet or GPT-4.5). To solve this, Agno supports using separate models for reasoning and response generation: a reasoning model handles problem-solving while a different model, optimized for natural language, writes the response, combining the strengths of both.

DeepSeek-R1 + Claude Sonnet

deepseek_plus_claude.py
from agno.agent import Agent
from agno.models.anthropic import Claude
from agno.models.groq import Groq

# Setup your Agent using an extra reasoning model
deepseek_plus_claude = Agent(
    model=Claude(id="claude-3-7-sonnet-20250219"),
    reasoning_model=Groq(
        id="deepseek-r1-distill-llama-70b", temperature=0.6, max_tokens=1024, top_p=0.95
    ),
)

# Run the Agent
deepseek_plus_claude.print_response("9.11 and 9.9 -- which is bigger?", stream=True)

Streaming Reasoning Content

When using a reasoning_model, you can stream the reasoning content as it’s being generated. This allows you to see the model’s thought process in real-time. To enable streaming reasoning, set stream=True and stream_events=True when running the agent:
streaming_reasoning.py
from agno.agent import Agent
from agno.models.anthropic import Claude

# Create an agent with a reasoning model
agent = Agent(
    reasoning_model=Claude(
        id="claude-sonnet-4-5",
        thinking={"type": "enabled", "budget_tokens": 1024},
    ),
    reasoning=True,
    instructions="Think step by step about the problem.",
)

# Stream the response with reasoning events
agent.print_response(
    "What is 25 * 37? Show your reasoning.",
    stream=True,
    stream_events=True,
)

Capturing Reasoning Events

You can also capture individual reasoning events. This gives you fine-grained control over how reasoning content is displayed:
capture_reasoning_events.py
from agno.agent import Agent
from agno.models.anthropic import Claude
from agno.run.agent import RunEvent

agent = Agent(
    reasoning_model=Claude(
        id="claude-sonnet-4-5",
        thinking={"type": "enabled", "budget_tokens": 1024},
    ),
    reasoning=True,
    instructions="Think step by step about the problem.",
)

for run_output_event in agent.run(
    "What is 25 * 37? Show your reasoning.",
    stream=True,
    stream_events=True,
):
    if run_output_event.event == RunEvent.run_started:
        print(f"EVENT: {run_output_event.event}")
    elif run_output_event.event == RunEvent.reasoning_started:
        print(f"EVENT: {run_output_event.event}")
        print("Reasoning started...\n")
    elif run_output_event.event == RunEvent.reasoning_content_delta:
        # Stream reasoning content as it's being generated
        print(run_output_event.reasoning_content, end="", flush=True)
    elif run_output_event.event == RunEvent.run_content:
        if run_output_event.content:
            print(run_output_event.content, end="", flush=True)
    elif run_output_event.event == RunEvent.run_completed:
        print(f"EVENT: {run_output_event.event}")
The key events for streaming reasoning are:
  • RunEvent.reasoning_started: Emitted when reasoning begins
  • RunEvent.reasoning_content_delta: Emitted for each chunk of reasoning content as it streams
  • RunEvent.run_content: Emitted for the final response content
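As a minimal sketch of consuming these events, the snippet below accumulates reasoning and response text separately from an event stream. The MockEvent class and string constants are hypothetical stand-ins so the sketch runs without a model call; real code would iterate over agent.run(...) and compare against RunEvent members as in the example above.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for event identifiers, used only so this
# sketch runs offline; real code checks run_output_event.event
# against RunEvent members.
REASONING_STARTED = "reasoning_started"
REASONING_CONTENT_DELTA = "reasoning_content_delta"
RUN_CONTENT = "run_content"

@dataclass
class MockEvent:
    event: str
    reasoning_content: str = ""
    content: str = ""

def collect(events):
    """Split an event stream into reasoning text and response text."""
    reasoning, response = [], []
    for ev in events:
        if ev.event == REASONING_CONTENT_DELTA:
            reasoning.append(ev.reasoning_content)
        elif ev.event == RUN_CONTENT:
            response.append(ev.content)
    return "".join(reasoning), "".join(response)

stream = [
    MockEvent(REASONING_STARTED),
    MockEvent(REASONING_CONTENT_DELTA, reasoning_content="25 * 37 = 925"),
    MockEvent(RUN_CONTENT, content="The answer is 925."),
]
reasoning, response = collect(stream)
print(reasoning)  # 25 * 37 = 925
print(response)   # The answer is 925.
```

Accumulating the two streams separately like this is useful when you want to log or display reasoning apart from the final answer.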

Developer Resources