ReAct: Reason and Act
At the core of effective reasoning lies the ReAct (Reason and Act) methodology: a paradigm where agents alternate between reasoning about a problem and taking actions (like calling tools) to gather information or execute tasks. This iterative process allows agents to break complex problems into manageable steps, validate their assumptions through action, and adjust their approach based on real-world feedback.

In Agno, ReAct principles are embedded throughout our reasoning implementations. Whether an agent uses a reasoning model to think through a problem or employs reasoning tools to structure its thought process, it follows the same fundamental pattern of reasoning → acting → observing → reasoning again until it reaches a solution.

Agno supports 3 approaches to reasoning:

1. Reasoning Models
2. Reasoning Tools
3. Reasoning Agents

Which approach works best will depend on your use case; we recommend trying them all and immersing yourself in this new era of Reasoning Agents!

Reasoning Models
Reasoning models are a separate class of large language models pre-trained to think before they answer: they produce an internal chain of thought before generating a final response. Examples of reasoning models include OpenAI's o-series, Claude 3.7 Sonnet in extended-thinking mode, Gemini 2.0 Flash Thinking, and DeepSeek-R1. Reasoning at the model layer is all about what the model does before it starts generating a final response.

Reasoning models excel at single-shot use cases. They're perfect for solving hard problems (coding, math, physics) that don't require multiple turns or sequential tool calls. You can try any supported Agno model; if that model has reasoning capabilities, they will be used to reason about the problem.

Example
o3_mini.py
Reasoning Model + Response Model
What if we wanted to use a reasoning model to reason, but a different model to generate the response? Reasoning models are great at solving problems but not as good at responding in a natural, conversational way (unlike Claude Sonnet or GPT-4o). By using one model for reasoning and a different one to generate the response, we can have the best of both worlds.

Example
Let’s use DeepSeek-R1 from Groq for reasoning and Claude Sonnet for a natural response.

deepseek_plus_claude.py
Reasoning Tools
By giving a model reasoning tools, we can greatly improve its reasoning capabilities by providing a dedicated space for structured thinking. This is a simple yet effective approach to add reasoning to non-reasoning models. The approach was first published by Anthropic in this blog post, but has been practiced by many AI engineers (including our own team) long before it was published.

Example
claude_reasoning_tools.py
Reasoning Agents
Reasoning Agents are a new type of multi-agent system developed by Agno that combines chain-of-thought reasoning with tool use. You can enable reasoning on any Agent by setting reasoning=True.

When an Agent with reasoning=True is given a task, a separate “Reasoning Agent” first solves the problem using chain-of-thought reasoning. At each step, it calls tools to gather information, validate results, and iterate until it reaches a final answer. Once the Reasoning Agent has a final answer, it hands the results back to the original Agent to validate and provide a response.
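The reasoning → acting → observing loop that a Reasoning Agent runs can be sketched in plain Python. This is a toy illustration, not Agno's implementation: the calculator tool and the scripted plan below are hypothetical stand-ins for the choices an LLM would make dynamically at each step.

```python
def calculator(expression: str) -> str:
    """A toy tool the agent can call to act on the world."""
    return str(eval(expression))  # illustration only; never eval untrusted input


TOOLS = {"calculator": calculator}


def react_loop(task: str, plan: list[tuple[str, str]]) -> str:
    """Run a scripted reason -> act -> observe loop.

    `plan` stands in for the model's reasoning: each entry names a tool
    and its input, where each input can build on earlier observations.
    """
    observations = []
    for tool_name, tool_input in plan:              # reason: pick the next action
        observation = TOOLS[tool_name](tool_input)  # act: call the chosen tool
        observations.append(observation)            # observe: record the result
    return observations[-1]                         # final answer after iterating


answer = react_loop(
    "What is (2 + 3) * 4?",
    plan=[("calculator", "2 + 3"), ("calculator", "5 * 4")],
)
print(answer)  # -> 20
```

In a real Reasoning Agent the plan is not fixed in advance: after each observation the model re-reasons and decides whether to call another tool or hand its final answer back to the original Agent.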
Example
reasoning_agent.py