Reasoning Agents
Reasoning Agents are a new type of multi-agent system developed by Agno that combines chain of thought reasoning with tool use.
You can enable reasoning on any Agent by setting reasoning=True.
When an Agent with reasoning=True is given a task, a separate “Reasoning Agent” first solves the problem using chain-of-thought. At each step, it calls tools to gather information, validate results, and iterate until it reaches a final answer. Once the Reasoning Agent has a final answer, it hands the results back to the original Agent to validate and provide a response.
Example
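The snippet below is a minimal sketch of a reasoning-enabled Agent built with the agno Python package. It assumes an OpenAI API key is available in the environment, and the task prompt is just an illustrative puzzle; exact parameter names such as show_full_reasoning reflect recent agno releases and may vary by version.

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat

task = (
    "Three missionaries and three cannibals need to cross a river with a boat "
    "that carries at most two people. If cannibals ever outnumber missionaries "
    "on either bank, the missionaries are eaten. How do all six cross safely?"
)

reasoning_agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    reasoning=True,   # a separate Reasoning Agent works through the problem first
    markdown=True,
)

# Stream the response and print the Reasoning Agent's intermediate steps
reasoning_agent.print_response(task, stream=True, show_full_reasoning=True)
```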
Enabling Agentic Reasoning
To enable Agentic Reasoning, set reasoning=True or set the reasoning_model to a model that supports structured outputs. If you do not set reasoning_model, the primary Agent model will be used for reasoning. Both options are sketched after the next note.
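For illustration, here is a sketch of both options. The Claude model class and the model id are assumptions based on agno's provider modules and may differ in your installed version.

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.models.anthropic import Claude

# Option 1: reuse the primary model (gpt-4o) for reasoning
agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    reasoning=True,
)

# Option 2: reason with a different model that supports structured outputs
agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    reasoning_model=Claude(id="claude-3-7-sonnet-20250219"),  # assumed model id
)
```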
Reasoning Model Requirements
The reasoning_model must be able to handle structured outputs. This includes models like gpt-4o and claude-3-7-sonnet, which support structured outputs natively, and gemini models, which support structured outputs using JSON mode.
Using a Reasoning Model that supports native Reasoning
If you set reasoning_model to a model that supports native reasoning, like o3-mini or deepseek-r1, the reasoning model will be used to reason and the primary Agent model will be used to respond. See Reasoning Models + Response Models for more information.
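A hedged sketch of this split, assuming agno exposes a DeepSeek model class and that deepseek-reasoner is the provider id for deepseek-r1: the DeepSeek model does the reasoning, while gpt-4o writes the final answer.

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.models.deepseek import DeepSeek

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),                   # responds to the user
    reasoning_model=DeepSeek(id="deepseek-reasoner"),  # reasons natively (assumed id)
)

agent.print_response("Which is bigger: 9.11 or 9.9?", stream=True)
```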
Reasoning with tools
You can also use tools with a reasoning agent. Let's create a finance agent that can reason.
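The sketch below assumes agno's YFinanceTools toolkit (which requires the yfinance package) and an OpenAI model; the toolkit flags and show_full_reasoning parameter reflect recent agno versions and may vary.

```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.yfinance import YFinanceTools

reasoning_agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    reasoning=True,
    tools=[
        YFinanceTools(
            stock_price=True,
            analyst_recommendations=True,
            company_info=True,
            company_news=True,
        )
    ],
    instructions=["Use tables to display data"],
    show_tool_calls=True,
    markdown=True,
)

reasoning_agent.print_response(
    "Write a report comparing NVDA to TSLA",
    stream=True,
    show_full_reasoning=True,
)
```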
More Examples
Logical puzzles
Mathematical proofs
Scientific research
Ethical dilemma
Planning an itinerary
Creative writing
Developer Resources
You can find more examples in the Reasoning Agents Cookbook.