This example demonstrates using a custom evaluator agent with its own evaluation instructions, here a deliberately strict judge.
1. Add the following code to your Python file

agent_as_judge_custom_evaluator.py
from agno.agent import Agent
from agno.eval.agent_as_judge import AgentAsJudgeEval
from agno.models.openai import OpenAIResponses

agent = Agent(
    model=OpenAIResponses(id="gpt-5.2"),
    instructions="Explain technical concepts simply.",
)

response = agent.run("What is machine learning?")

# Create a custom evaluator with specific instructions
custom_evaluator = Agent(
    model=OpenAIResponses(id="gpt-5.2"),
    description="Strict technical evaluator",
    instructions="You are a strict evaluator. Only give high scores to exceptionally clear and accurate explanations.",
)

evaluation = AgentAsJudgeEval(
    name="Technical Accuracy",
    criteria="Explanation must be technically accurate and comprehensive",
    scoring_strategy="numeric",
    threshold=8,
    evaluator_agent=custom_evaluator,
)

result = evaluation.run(
    input="What is machine learning?",
    output=str(response.content),
)

print(f"Score: {result.results[0].score}/10")
print(f"Passed: {result.results[0].passed}")
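The `passed` flag above is derived by comparing the numeric score against `threshold` (8 in this example). As a rough sketch of that gating logic, in pure Python with no API call, where `JudgeResult` is a hypothetical stand-in for one entry of `result.results`:

```python
from dataclasses import dataclass

@dataclass
class JudgeResult:
    # Hypothetical stand-in for one entry of `result.results`:
    # a 0-10 numeric score plus the derived pass flag.
    score: int
    passed: bool

def gate(score: int, threshold: int = 8) -> JudgeResult:
    # Same rule the eval applies: pass if and only if score >= threshold.
    return JudgeResult(score=score, passed=score >= threshold)

print(gate(9))  # passed=True
print(gate(7))  # passed=False
```

Raising or lowering `threshold` is the main lever for how strict the check is; the strict evaluator instructions shift the scores themselves, while the threshold decides what counts as a pass.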

2. Set up your virtual environment

uv venv --python 3.12
source .venv/bin/activate
3. Install dependencies

uv pip install -U agno openai
4. Export your OpenAI API key

export OPENAI_API_KEY="your_openai_api_key_here"
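If the key is not exported, the script fails at the first model call. A minimal preflight check, an optional extra that is not part of the example itself, is:

```python
import os

# Report whether the OpenAI key is visible to this process
# before any model calls are made.
key = os.environ.get("OPENAI_API_KEY", "")
print("OPENAI_API_KEY is set" if key else "OPENAI_API_KEY is missing")
```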
5. Run the example

python agent_as_judge_custom_evaluator.py