Tools are functions your Agno Agents can use to get things done.
Tools are what make Agents and Teams capable of real-world action. While an LLM on its own can only generate text responses, Agents and Teams equipped with tools can interact with external systems and perform practical actions. Some examples of actions that can be performed with tools are: searching the web, running SQL, sending an email or calling APIs.

Agno comes with 120+ pre-built toolkits, which you can use to give your Agents all kinds of abilities. You can also write your own tools to give your Agents even more capabilities. The general syntax is:
```python
import random

from agno.agent import Agent
from agno.models.openai import OpenAIChat


def get_weather(city: str) -> str:
    """Get the weather for the given city.

    Args:
        city (str): The city to get the weather for.
    """
    # In a real implementation, this would call a weather API
    weather_conditions = ["sunny", "cloudy", "rainy", "snowy", "windy"]
    random_weather = random.choice(weather_conditions)
    return f"The weather in {city} is {random_weather}."


# To equip our Agent with our tool, we simply pass it with the tools parameter
agent = Agent(
    model=OpenAIChat(id="gpt-5-nano"),
    tools=[get_weather],
    markdown=True,
)

# Our Agent will now be able to use our tool, when it deems it relevant
agent.print_response("What is the weather in San Francisco?", stream=True)
```
In the example above, the get_weather function is a tool. When called, the tool result is shown in the output.
Agno automatically converts your tool functions into the tool definition format the model requires. Typically this is a JSON schema that describes the tool's name, description and parameters. For example:
```python
def get_weather(city: str) -> str:
    """
    Get the weather for a given city.

    Args:
        city (str): The city to get the weather for.
    """
    return f"The weather in {city} is sunny."
```
This will be converted into the following tool definition:
```json
{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Get the weather for a given city.",
    "parameters": {
      "type": "object",
      "properties": {
        "city": {
          "type": "string",
          "description": "The city to get the weather for."
        }
      },
      "required": ["city"]
    }
  }
}
```
This tool definition is then sent to the model so that it knows how to call the tool when it is requested.
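Agno performs this conversion for you. Purely as an illustration of the idea, a simplified, framework-agnostic sketch of such a conversion might look like this (`build_tool_definition` is a hypothetical helper, not part of Agno; it only handles flat signatures with basic types):

```python
import inspect
from typing import get_type_hints


def get_weather(city: str) -> str:
    """Get the weather for a given city.

    Args:
        city (str): The city to get the weather for.
    """
    return f"The weather in {city} is sunny."


# Mapping from Python annotations to JSON schema types (basic types only)
JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}


def build_tool_definition(func) -> dict:
    """Hypothetical sketch: build an OpenAI-style tool definition from a typed, documented function."""
    hints = get_type_hints(func)
    hints.pop("return", None)  # the return type is not part of the schema
    doc = inspect.getdoc(func) or ""
    description = doc.split("Args:")[0].strip()
    # Parse "name (type): description" lines from the Args section
    arg_descriptions = {}
    if "Args:" in doc:
        for line in doc.split("Args:")[1].splitlines():
            line = line.strip()
            if "(" in line and ":" in line:
                name = line.split("(")[0].strip()
                arg_descriptions[name] = line.split(":", 1)[1].strip()
    properties = {
        name: {"type": JSON_TYPES[hint], "description": arg_descriptions.get(name, "")}
        for name, hint in hints.items()
    }
    return {
        "type": "function",
        "function": {
            "name": func.__name__,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": list(hints),
            },
        },
    }
```

Running `build_tool_definition(get_weather)` would yield a definition shaped like the JSON above; Agno's real conversion handles many more cases (Pydantic models, optional arguments, and so on).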
You'll also notice that the Args section is stripped from the description, parsed and used to populate the definitions of the individual properties.

When a tool function takes a Pydantic model as an argument, Agno will automatically convert the model into the required tool definition format. For example:
```python
from pydantic import BaseModel, Field


class GetWeatherRequest(BaseModel):
    city: str = Field(description="The city to get the weather for")


def get_weather(request: GetWeatherRequest) -> str:
    """
    Get the weather for a given city.

    Args:
        request (GetWeatherRequest): The request object containing the city to get the weather for.
    """
    return f"The weather in {request.city} is sunny."
```
This will be converted into the following tool definition:
```json
{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Get the weather for a given city.",
    "parameters": {
      "type": "object",
      "properties": {
        "request": {
          "type": "object",
          "properties": {
            "city": {
              "type": "string",
              "description": "The city to get the weather for."
            }
          },
          "required": ["city"]
        }
      },
      "required": ["request"]
    }
  }
}
```
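The nested `request` object mirrors the schema Pydantic itself generates for the model. If you have Pydantic v2 installed, you can inspect that inner schema directly (`model_json_schema` is Pydantic's own API, shown here for illustration; it is not an Agno call):

```python
from pydantic import BaseModel, Field


class GetWeatherRequest(BaseModel):
    city: str = Field(description="The city to get the weather for")


# Pydantic produces the JSON schema that becomes the nested "request" object above
schema = GetWeatherRequest.model_json_schema()
print(schema)
```

The printed schema contains the same `properties.city` entry and `required` list that appear inside the `request` object of the tool definition.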
Always create a docstring for your tool functions. Make sure to include the Args section and cover each of the arguments of the function.
Use sensible names for your tool functions. Remember that the model uses the function name directly when requesting a tool call.
When the model requests a tool call, the tool is executed and the result is returned to the model.
A model can request multiple tool calls in a single response.
When you execute the agent or team with arun and the model requests multiple tool calls, the tools are executed concurrently.
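The fan-out from a single model response to several tool executions can be illustrated framework-agnostically. The sketch below uses plain Python, no Agno APIs; `execute_tool_calls` and the call format are illustrative assumptions, not Agno internals:

```python
import json


def get_weather(city: str) -> str:
    return f"The weather in {city} is sunny."


def get_time(city: str) -> str:
    return f"It is 12:00 in {city}."


# Registry mapping tool names (as the model sees them) to functions
TOOLS = {"get_weather": get_weather, "get_time": get_time}


def execute_tool_calls(tool_calls: list[dict]) -> list[dict]:
    """Hypothetical sketch: run each requested tool and collect the results to send back to the model."""
    results = []
    for call in tool_calls:
        func = TOOLS[call["name"]]
        # Models typically send arguments as a JSON-encoded string
        output = func(**json.loads(call["arguments"]))
        results.append({"tool": call["name"], "content": output})
    return results


# A single model response can contain several tool calls:
response_tool_calls = [
    {"name": "get_weather", "arguments": '{"city": "Paris"}'},
    {"name": "get_time", "arguments": '{"city": "Paris"}'},
]
results = execute_tool_calls(response_tool_calls)
```

Each result is returned to the model as a tool message, and the model then continues the conversation with those results in context.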
Agno Agents can execute multiple tools concurrently, allowing you to process function calls that the model makes efficiently. This is especially valuable when the functions involve time-consuming operations. It improves responsiveness and reduces overall execution time.
When you call arun or aprint_response, your tools will execute concurrently. If you provide synchronous functions as tools, they will execute concurrently on separate threads.
Concurrent execution of tools requires a model that supports parallel function
calling. For example, OpenAI models have a parallel_tool_calls parameter
(enabled by default) that allows multiple tool calls to be requested and
executed simultaneously.
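The underlying pattern can be sketched without any Agno APIs: offloading synchronous functions to threads and awaiting them together is how concurrent execution of sync tools generally works (a minimal sketch using only the standard library):

```python
import asyncio
import time


def slow_tool(name: str, delay: float) -> str:
    """A synchronous tool that blocks for `delay` seconds."""
    time.sleep(delay)
    return f"{name} done"


async def run_tools_concurrently() -> list[str]:
    # Offload each sync tool to a worker thread so the calls overlap,
    # mirroring how sync tools run concurrently on separate threads
    start = time.perf_counter()
    results = await asyncio.gather(
        asyncio.to_thread(slow_tool, "task1", 0.2),
        asyncio.to_thread(slow_tool, "task2", 0.2),
        asyncio.to_thread(slow_tool, "task3", 0.2),
    )
    elapsed = time.perf_counter() - start
    # Concurrent total is close to the longest single delay (~0.2s),
    # not the 0.6s a sequential run would take
    assert elapsed < 0.5
    return list(results)


results = asyncio.run(run_tools_concurrently())
print(results)
```

This is the same shape of speedup you see in the Agno example below, where three tool calls overlap instead of running back to back.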
Async Execution Example
async_tools.py
```python
import asyncio

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.utils.log import logger


async def atask1(delay: int):
    """Simulate a task that takes `delay` seconds to complete

    Args:
        delay (int): The number of seconds to delay the task
    """
    logger.info("Task 1 has started")
    for _ in range(delay):
        await asyncio.sleep(1)
        logger.info("Task 1 has slept for 1s")
    logger.info("Task 1 has completed")
    return f"Task 1 completed in {delay:.2f}s"


async def atask2(delay: int):
    """Simulate a task that takes `delay` seconds to complete

    Args:
        delay (int): The number of seconds to delay the task
    """
    logger.info("Task 2 has started")
    for _ in range(delay):
        await asyncio.sleep(1)
        logger.info("Task 2 has slept for 1s")
    logger.info("Task 2 has completed")
    return f"Task 2 completed in {delay:.2f}s"


async def atask3(delay: int):
    """Simulate a task that takes `delay` seconds to complete

    Args:
        delay (int): The number of seconds to delay the task
    """
    logger.info("Task 3 has started")
    for _ in range(delay):
        await asyncio.sleep(1)
        logger.info("Task 3 has slept for 1s")
    logger.info("Task 3 has completed")
    return f"Task 3 completed in {delay:.2f}s"


async_agent = Agent(
    model=OpenAIChat(id="gpt-5-mini"),
    tools=[atask2, atask1, atask3],
    markdown=True,
)

asyncio.run(
    async_agent.aprint_response("Please run all tasks with a delay of 3s", stream=True)
)
```
In this example, gpt-5-mini makes three simultaneous tool calls to atask1, atask2 and atask3. These would normally execute sequentially, but with aprint_response they run concurrently, reducing total execution time.
Agno automatically provides special parameters to your tools that give access to the agent’s parameters, state and other variables. These parameters are injected automatically - the agent doesn’t need to know about them.
You can access values from the current run via the run_context parameter: run_context.session_state, run_context.dependencies, run_context.knowledge_filters and run_context.metadata. See the RunContext schema for more information.

This allows tools to access and modify persistent data across conversations, which is useful when a tool result is relevant for the next steps of the conversation. Add run_context as a parameter in your tool function to access the agent's persistent state:
```python
from agno.agent import Agent
from agno.db.sqlite import SqliteDb
from agno.models.openai import OpenAIChat
from agno.run import RunContext


def add_item(run_context: RunContext, item: str) -> str:
    """Add an item to the shopping list."""
    if not run_context.session_state:
        run_context.session_state = {}
    # Create the list on first use so we never hit a missing key
    run_context.session_state.setdefault("shopping_list", []).append(item)
    return f"The shopping list is now {run_context.session_state['shopping_list']}"


# Create an Agent that maintains state
agent = Agent(
    model=OpenAIChat(id="gpt-4o-mini"),
    # Initialize the session state with an empty shopping list (this is the default session state for all users)
    session_state={"shopping_list": []},
    db=SqliteDb(db_file="tmp/agents.db"),
    tools=[add_item],
    # You can use variables from the session state in the instructions
    instructions="Current state (shopping list) is: {shopping_list}",
    markdown=True,
)

# Example usage
agent.print_response("Add milk, eggs, and bread to the shopping list", stream=True)
print(f"Final session state: {agent.get_session_state()}")
```
The built-in parameters images, videos, audio, and files allow tools to access and modify the input media passed to an agent.

With the send_media_to_model parameter you can control whether the media is sent to the model, and with the store_media parameter you can control whether the media is stored in the RunOutput.