Requesty AI is an LLM gateway that provides unified access to a wide range of language models with built-in governance and monitoring capabilities. Learn more about Requesty’s features at requesty.ai.

Authentication

Set your REQUESTY_API_KEY environment variable. Get your key from Requesty.
export REQUESTY_API_KEY=***
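If you prefer not to rely on the environment variable, you can pass the key directly through the api_key parameter listed in the Params table below. A minimal sketch (the placeholder key is illustrative):

from agno.agent import Agent
from agno.models.requesty import Requesty

# Pass the key explicitly instead of reading REQUESTY_API_KEY from the environment
agent = Agent(
    model=Requesty(id="openai/gpt-4o", api_key="your-requesty-api-key"),
    markdown=True,
)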

Example

Use Requesty with your Agent:
from agno.agent import Agent
from agno.models.requesty import Requesty

agent = Agent(model=Requesty(id="openai/gpt-4o"), markdown=True)

# Print the response in the terminal
agent.print_response("Share a 2 sentence horror story")

View more examples here.

Params

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `id` | `str` | `"openai/gpt-4.1"` | The id of the model to use through Requesty |
| `name` | `str` | `"Requesty"` | The name of the model |
| `provider` | `str` | `"Requesty"` | The provider of the model |
| `api_key` | `Optional[str]` | `None` | The API key for Requesty (defaults to the `REQUESTY_API_KEY` environment variable) |
| `base_url` | `str` | `"https://router.requesty.ai/v1"` | The base URL for the Requesty API |
| `max_tokens` | `int` | `1024` | The maximum number of tokens to generate |
In addition to the params listed above, Requesty supports the standard OpenAI model params.
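
For example, you can combine the Requesty-specific params above with an OpenAI-style param. A minimal sketch; the temperature value is an illustrative assumption based on Requesty passing through OpenAI params:

from agno.agent import Agent
from agno.models.requesty import Requesty

model = Requesty(
    id="openai/gpt-4.1",                        # default model id
    base_url="https://router.requesty.ai/v1",   # default Requesty router endpoint
    max_tokens=2048,                            # raise the limit from the 1024 default
    temperature=0.2,                            # assumed OpenAI-compatible param
)

agent = Agent(model=model, markdown=True)
agent.print_response("Summarize what an LLM gateway does in one sentence.")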