Set reasoning=True and watch your model break down the problem, explore multiple approaches, validate results, and deliver thoroughly vetted solutions.
The beauty? It works with any model, from GPT-4o to Claude to local models via Ollama. You’re not limited to specialized reasoning models.
How It Works
Enable reasoning on any agent by setting reasoning=True:
The Reasoning Framework
1. Problem Analysis
   - Restate the task to ensure full comprehension
   - Identify required information and necessary tools
2. Decompose and Strategize
   - Break down the problem into subtasks
   - Develop multiple distinct approaches
3. Intent Clarification and Planning
   - Articulate the user's intent
   - Select the best strategy with clear justification
   - Create a detailed action plan
4. Execute the Action Plan
   - For each step: document title, action, result, reasoning, next action, and confidence score
   - Call tools as needed to gather information
   - Self-correct if errors are detected
5. Validation (Mandatory)
   - Cross-verify with alternative approaches
   - Use additional tools to confirm accuracy
   - Reset and revise if validation fails
6. Final Answer
   - Deliver the thoroughly validated solution
   - Explain how it addresses the original task
How It Differs by Model Type
With regular models (gpt-4o, Claude Sonnet, Gemini):
- Forces structured chain-of-thought through the 6-step framework
- Creates detailed reasoning steps with confidence scores
- This is where reasoning agents shine: transforming any model into a reasoning system

With native reasoning models (gpt-5-mini, DeepSeek-R1):
- Uses the model's built-in reasoning capabilities
- Adds a validation pass from your main agent
- Useful for critical tasks but often unnecessary overhead for simpler problems
Basic Example
Let’s transform a regular GPT-4o model into a reasoning system:
reasoning_agent.py
What You’ll See
With show_full_reasoning=True, you’ll see:
- Each reasoning step with its title, action, and result
- The agent’s thought process including why it chose each approach
- Tool calls made during reasoning (if tools are provided)
- Validation checks performed to verify the solution
- Confidence scores for each step (0.0–1.0)
- Self-corrections if the agent detects errors
- The final polished response from your main agent
Reasoning with Tools
Here’s where reasoning agents truly excel: combining multi-step reasoning with tool use. The reasoning agent can call tools iteratively, analyze results, and build toward a comprehensive solution.
finance_reasoning.py
Given a query like comparing three stocks, the reasoning agent will:
- Break down the task (need stock data for 3 companies)
- Use DuckDuckGo to search for current market data
- Analyze each company’s performance
- Search for news about key drivers
- Validate findings across multiple sources
- Create a comprehensive comparison with tables
- Provide a final answer with clear insights
Configuration Options
Display Options
Want to peek under the hood? Control what you see during reasoning with display flags like show_full_reasoning on your response calls.
Capturing Reasoning Events
For building custom UIs or programmatically tracking reasoning progress, you can capture reasoning events (ReasoningStarted, ReasoningStep, ReasoningCompleted) as they happen during streaming. See the Reasoning Reference for event attributes and complete code examples.
Iteration Control
Adjust how many reasoning steps the agent takes:
- reasoning_min_steps: Ensures the agent thinks through at least this many steps before answering
- reasoning_max_steps: Prevents infinite loops by capping the iteration count
Custom Reasoning Agent
For advanced use cases, you can provide your own reasoning agent instead of the default one.
Example Use Cases
- Logical Puzzles
- Mathematical Proofs
- Scientific Research
- Planning & Itineraries
- Creative Writing
Breaking down complex logic problems:
logical_puzzle.py
When to Use Reasoning Agents
Use reasoning agents when:
- Your task requires multiple sequential steps
- You need the agent to call tools iteratively and build on results
- You want automated chain-of-thought without manually calling reasoning tools
- You need self-validation and error correction
- The problem benefits from exploring multiple approaches before settling on a solution

Skip reasoning agents when:
- You’re using a native reasoning model (gpt-5-mini, DeepSeek-R1) for simple tasks: just use the model directly
- You want explicit control over when the agent thinks vs. acts: use Reasoning Tools instead
- The task is straightforward and doesn’t require multi-step thinking