Code
cookbook/11_models/meta/llama/basic_stream.py
Usage
1. Set up your virtual environment
2. Set your LLAMA API key
3. Install dependencies
4. Run Agent
```python
from typing import Iterator  # noqa

from agno.agent import Agent, RunOutputEvent  # noqa
from agno.models.meta import Llama

agent = Agent(model=Llama(id="Llama-4-Maverick-17B-128E-Instruct-FP8"), markdown=True)

# Get the response in a variable
# run_response: Iterator[RunOutputEvent] = agent.run("Share a 2 sentence horror story", stream=True)
# for chunk in run_response:
#     print(chunk.content)

# Print the response in the terminal
agent.print_response("Share a 2 sentence horror story", stream=True)
```
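With `stream=True`, the run yields events one chunk at a time, and the commented loop above concatenates their `content` fields. A minimal sketch of that consumption pattern, using a stand-in generator so no API key is required (`SimpleEvent` and `fake_stream` are hypothetical names for illustration, not part of agno):

```python
from dataclasses import dataclass
from typing import Iterator


@dataclass
class SimpleEvent:
    # Stand-in for a streamed event carrying one chunk of text
    content: str


def fake_stream() -> Iterator[SimpleEvent]:
    # Simulates agent.run(..., stream=True) yielding chunks in order
    for chunk in ["The door creaked open. ", "No one was there."]:
        yield SimpleEvent(content=chunk)


# Consume the stream chunk by chunk, accumulating the full response
full_response = ""
for event in fake_stream():
    print(event.content, end="")
    full_response += event.content
print()
```

The same loop shape applies to a real streamed run: replace `fake_stream()` with `agent.run(..., stream=True)` and read each event's `content` as it arrives.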
1. Set up your virtual environment

```shell
uv venv --python 3.12
source .venv/bin/activate
```

2. Set your LLAMA API key

```shell
export LLAMA_API_KEY=YOUR_API_KEY
```

3. Install dependencies

```shell
uv pip install llama-api-client agno
```

4. Run Agent

```shell
python cookbook/11_models/meta/llama/basic_stream.py
```