Simple Accuracy
Learn to check how complete, correct, and accurate an Agno Agent's response is.
This example evaluates the agent's full output for correctness: a judge model compares the response against the expected answer (10*5 = 50, and 50^2 = 2500) and scores how well it matches.
from typing import Optional

from agno.agent import Agent
from agno.eval.accuracy import AccuracyEval, AccuracyResult
from agno.models.openai import OpenAIChat
from agno.tools.calculator import CalculatorTools

# The eval pairs an agent under test with a judge model that scores the
# agent's response against the expected output.
evaluation = AccuracyEval(
    model=OpenAIChat(id="o4-mini"),  # judge model
    agent=Agent(
        model=OpenAIChat(id="gpt-4o"),  # agent under test
        tools=[CalculatorTools(enable_all=True)],
    ),
    input="What is 10*5 then to the power of 2? do it step by step",
    expected_output="2500",
    additional_guidelines="Agent output should include the steps and the final answer.",
)

# run() returns an AccuracyResult; avg_score is the judge's mean score on a
# 1-10 scale, and the assertion requires at least 8 to pass.
result: Optional[AccuracyResult] = evaluation.run(print_results=True)
assert result is not None and result.avg_score >= 8
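
Judge scores can vary from run to run. A minimal variation of the same eval, assuming AccuracyEval accepts a num_iterations parameter (as other Agno eval examples suggest), judges the same input several times so avg_score averages out single-run noise before the threshold check:

from typing import Optional

from agno.agent import Agent
from agno.eval.accuracy import AccuracyEval, AccuracyResult
from agno.models.openai import OpenAIChat
from agno.tools.calculator import CalculatorTools

evaluation = AccuracyEval(
    model=OpenAIChat(id="o4-mini"),
    agent=Agent(
        model=OpenAIChat(id="gpt-4o"),
        tools=[CalculatorTools(enable_all=True)],
    ),
    input="What is 10*5 then to the power of 2? do it step by step",
    expected_output="2500",
    additional_guidelines="Agent output should include the steps and the final answer.",
    num_iterations=5,  # assumed parameter: score the same input five times
)

# avg_score is now a mean over all iterations, smoothing judge variance.
result: Optional[AccuracyResult] = evaluation.run(print_results=True)
assert result is not None and result.avg_score >= 8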