Make sure to start the LiteLLM proxy server first:

```shell
litellm --model gpt-4o --host 127.0.0.1 --port 4000
```

Code

cookbook/models/litellm_openai/basic_stream.py

```python
from agno.agent import Agent, RunResponse  # noqa
from agno.models.litellm import LiteLLMOpenAI

agent = Agent(model=LiteLLMOpenAI(id="gpt-4o"), markdown=True)

agent.print_response("Share a 2 sentence horror story", stream=True)
```

Usage

1. Create a virtual environment

Open the Terminal and create a Python virtual environment.

```shell
python3 -m venv .venv
source .venv/bin/activate
```
2. Set your API key

```shell
export LITELLM_API_KEY=xxx
```
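If the key is not exported, the request to the proxy fails later with an opaque authentication error, so it can help to check for it up front. A minimal sketch of such a guard (the `require_api_key` helper is hypothetical, not part of agno or LiteLLM):

```python
import os


def require_api_key(name: str = "LITELLM_API_KEY") -> str:
    """Return the key from the environment, or fail fast with a clear message."""
    key = os.getenv(name)
    if not key:
        raise RuntimeError(f"{name} is not set; export it before running the agent.")
    return key
```

Calling `require_api_key()` at the top of the script surfaces a missing key immediately instead of mid-run.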
3. Install libraries

```shell
pip install -U "litellm[proxy]" openai agno
```
4. Run the Agent

```shell
python cookbook/models/litellm_openai/basic_stream.py
```