Proxy Server Integration

LiteLLM can also be used as an OpenAI-compatible proxy server, allowing you to route requests to different models through a unified API.

Starting the Proxy Server

First, install LiteLLM with proxy support:

pip install 'litellm[proxy]'

Start the proxy server:

litellm --model gpt-4o --host 127.0.0.1 --port 4000
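
Once the proxy is running, you can sanity-check it with any OpenAI-compatible client before wiring it into an agent. Below is a minimal sketch using the openai package; the placeholder API key assumes no master key is configured on the proxy:

from openai import OpenAI

# Point the client at the local proxy instead of api.openai.com
client = OpenAI(base_url="http://127.0.0.1:4000", api_key="sk-anything")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)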

Using the Proxy

The LiteLLMOpenAI class connects to the LiteLLM proxy using an OpenAI-compatible interface:

from agno.agent import Agent
from agno.models.litellm import LiteLLMOpenAI

agent = Agent(
    model=LiteLLMOpenAI(
        id="gpt-4o",  # Model ID to use
    ),
    markdown=True,
)

agent.print_response("Share a 2 sentence horror story")
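
If your installed Agno version supports it, the response can also be streamed token by token by passing a stream flag (an assumption to verify against your version):

agent.print_response("Share a 2 sentence horror story", stream=True)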

Configuration Options

The LiteLLMOpenAI class accepts the following parameters:

Parameter | Type | Description                                                      | Default
--------- | ---- | ---------------------------------------------------------------- | -------
id        | str  | Model identifier                                                  | "gpt-4o"
name      | str  | Display name for the model                                        | "LiteLLM"
provider  | str  | Provider name                                                     | "LiteLLM"
api_key   | str  | API key (falls back to the LITELLM_API_KEY environment variable) | None
base_url  | str  | URL of the LiteLLM proxy server                                   | "http://0.0.0.0:4000"
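
To point the model at a proxy on another host, or to authenticate against a secured proxy, pass base_url and api_key explicitly. A sketch assuming the proxy started earlier and a key exported as LITELLM_API_KEY:

import os

from agno.agent import Agent
from agno.models.litellm import LiteLLMOpenAI

agent = Agent(
    model=LiteLLMOpenAI(
        id="gpt-4o",
        base_url="http://127.0.0.1:4000",  # defaults to http://0.0.0.0:4000
        api_key=os.getenv("LITELLM_API_KEY"),  # only needed if the proxy enforces auth
    ),
    markdown=True,
)

agent.print_response("Share a 2 sentence horror story")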

Examples

Check out these examples in the cookbook:

Proxy Examples
