Performance
Performance on Agent Instantiation with Tool
This example shows how to measure the runtime and memory usage of instantiating an Agent that uses tools.
Code
"""Run `pip install agno openai memory_profiler` to install dependencies."""
from typing import Literal
from agno.agent import Agent
from agno.eval.performance import PerformanceEval
from agno.models.openai import OpenAIChat
def get_weather(city: Literal["nyc", "sf"]):
"""Use this to get weather information."""
if city == "nyc":
return "It might be cloudy in nyc"
elif city == "sf":
return "It's always sunny in sf"
else:
raise AssertionError("Unknown city")
tools = [get_weather]
def instantiate_agent():
return Agent(model=OpenAIChat(id="gpt-4o"), tools=tools)
instantiation_perf = PerformanceEval(func=instantiate_agent, num_iterations=1000)
if __name__ == "__main__":
instantiation_perf.run(print_results=True, print_summary=True)
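To build intuition for what a performance eval measures, the same idea can be sketched with only the standard library: time repeated construction and trace peak allocation for one construction. The `DummyAgent` class below is a hypothetical stand-in for illustration, not part of agno.

```python
import timeit
import tracemalloc


# Hypothetical stand-in for an Agent: a tiny class whose
# construction cost we measure, holding a list of tools.
class DummyAgent:
    def __init__(self, tools):
        self.tools = list(tools)


def get_weather(city):
    return "It's always sunny in sf"


tools = [get_weather]

# Runtime: average seconds per instantiation over 1000 iterations.
runtime = timeit.timeit(lambda: DummyAgent(tools=tools), number=1000) / 1000

# Memory: peak bytes allocated while building a single instance.
tracemalloc.start()
agent = DummyAgent(tools=tools)
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"avg runtime: {runtime:.2e} s, peak memory: {peak} bytes")
```

`PerformanceEval` wraps this kind of measurement (via `memory_profiler` for memory) and adds iteration statistics and result printing.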