Start the LiteLLM proxy server
litellm --model gpt-4o --host 127.0.0.1 --port 4000
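The proxy now serves an OpenAI-compatible API at http://127.0.0.1:4000. As a quick sanity check (a minimal sketch, assuming the key your proxy expects is exported as LITELLM_API_KEY), you can point the standard openai client at it and list the served models:

import os

from openai import OpenAI

# Point the standard OpenAI client at the local LiteLLM proxy.
client = OpenAI(
    base_url="http://127.0.0.1:4000",
    api_key=os.getenv("LITELLM_API_KEY", "sk-placeholder"),
)

# List the models the proxy is serving; gpt-4o should be among them.
print([m.id for m in client.models.list().data])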
from agno.agent import Agent, RunResponse  # noqa
from agno.models.litellm import LiteLLMOpenAI

agent = Agent(model=LiteLLMOpenAI(id="gpt-4o"), markdown=True)
agent.print_response("Share a 2 sentence horror story", stream=True)
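The RunResponse import is unused in the streaming example above (hence the # noqa). If you want the response back as an object instead of streaming it to stdout, a minimal non-streaming sketch, assuming Agno's standard agent.run() API, looks like:

from agno.agent import Agent, RunResponse
from agno.models.litellm import LiteLLMOpenAI

agent = Agent(model=LiteLLMOpenAI(id="gpt-4o"), markdown=True)

# run() returns a RunResponse object instead of printing a stream.
response: RunResponse = agent.run("Share a 2 sentence horror story")
print(response.content)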
Create a virtual environment
python3 -m venv .venv
source .venv/bin/activate
Set your API key
export LITELLM_API_KEY=xxx
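Replace xxx with the key your LiteLLM proxy expects. A trivial pre-flight check (standard library only) confirms the variable is visible to Python before you start the agent:

import os

# Fail fast if the proxy key was not exported in this shell session.
assert os.getenv("LITELLM_API_KEY"), "Set LITELLM_API_KEY before running the agent"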
Install libraries
pip install -U litellm[proxy] openai agno
Run Agent
python cookbook/models/litellm/basic_stream.py