Use OpenAI models through Azure’s infrastructure. Learn more here.

Azure OpenAI provides access to OpenAI’s models like GPT-4o, o3-mini, and more.

Authentication

Navigate to Azure OpenAI on the Azure Portal and create a service. Then, using the Azure AI Studio portal, create a deployment and set your environment variables:

export AZURE_OPENAI_API_KEY=***
export AZURE_OPENAI_ENDPOINT=***  # Of the form https://<your-resource-name>.openai.azure.com/openai/deployments/<your-deployment-name>
# Optional:
# export AZURE_OPENAI_DEPLOYMENT=***
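
These values can also be passed to the model directly instead of being read from the environment. A minimal sketch using the api_key, azure_endpoint, and azure_deployment parameters described in the Parameters section below:

from os import getenv

from agno.agent import Agent
from agno.models.azure import AzureOpenAI

# Pass the credentials explicitly rather than relying on the
# AZURE_OPENAI_* environment variables being picked up automatically.
azure_model = AzureOpenAI(
    id="gpt-4o",
    api_key=getenv("AZURE_OPENAI_API_KEY"),
    azure_endpoint=getenv("AZURE_OPENAI_ENDPOINT"),
    azure_deployment=getenv("AZURE_OPENAI_DEPLOYMENT"),  # optional
)

agent = Agent(model=azure_model, markdown=True)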

Example

Use AzureOpenAI with your Agent:

from agno.agent import Agent
from agno.models.azure import AzureOpenAI

agent = Agent(
    model=AzureOpenAI(id="gpt-4o"),
    markdown=True
)

# Print the response on the terminal
agent.print_response("Share a 2 sentence horror story.")

Prompt caching

Prompt caching happens automatically when using the AzureOpenAI model. You can read more about how OpenAI handles caching in their docs.
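
Roughly, OpenAI's caching reuses long, repeated prompt prefixes, so keeping static content (such as instructions) at the front of the prompt is what benefits from it. A minimal sketch, using a hypothetical long instruction string and no extra configuration on the model:

from agno.agent import Agent
from agno.models.azure import AzureOpenAI

# A long, static system message forms the repeated prefix that
# provider-side prompt caching can reuse across requests.
long_instructions = "You are a support assistant for Acme Corp. " * 200  # hypothetical filler

agent = Agent(
    model=AzureOpenAI(id="gpt-4o"),
    instructions=long_instructions,
    markdown=True,
)

# Both calls share the same prompt prefix, so the second request may be
# served in part from the prompt cache.
agent.print_response("Summarize the refund policy.")
agent.print_response("What is the warranty period?")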

Advanced Examples

View more examples here.

Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| id | str | - | The specific model ID used for generating responses. This field is required. |
| name | str | "AzureOpenAI" | The name identifier for the model. |
| provider | str | "Azure" | The provider of the model. |
| api_key | Optional[str] | None | The API key for authenticating requests to the Azure OpenAI service. |
| api_version | str | "2024-10-21" | The version of the Azure OpenAI API to use. |
| azure_endpoint | Optional[str] | None | The endpoint URL for the Azure OpenAI service. |
| azure_deployment | Optional[str] | None | The deployment name or ID in Azure. |
| azure_ad_token | Optional[str] | None | The Azure Active Directory token for authenticating requests. |
| azure_ad_token_provider | Optional[Any] | None | The provider for obtaining Azure Active Directory tokens. |
| openai_client | Optional[AzureOpenAIClient] | None | An instance of AzureOpenAIClient provided for making API requests. |

AzureOpenAI also supports the parameters of OpenAI.
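
For example, the azure_ad_token_provider parameter can be combined with the azure-identity package to authenticate with Microsoft Entra ID (Azure AD) instead of an API key. A minimal sketch, assuming azure-identity is installed and your identity has access to the Azure OpenAI resource:

from os import getenv

from azure.identity import DefaultAzureCredential, get_bearer_token_provider

from agno.agent import Agent
from agno.models.azure import AzureOpenAI

# Issue Entra ID tokens for the Cognitive Services scope instead of
# supplying an API key.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

agent = Agent(
    model=AzureOpenAI(
        id="gpt-4o",
        api_version="2024-10-21",
        azure_endpoint=getenv("AZURE_OPENAI_ENDPOINT"),
        azure_ad_token_provider=token_provider,
    ),
    markdown=True,
)

agent.print_response("Share a 2 sentence horror story.")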