Code

cookbook/models/meta/async_stream.py
import asyncio

from agno.agent import Agent
from agno.models.meta import Llama

async def main():
    # Agent backed by Meta's Llama model
    agent = Agent(
        model=Llama(id="Llama-3.3-70B"),
        markdown=True,
    )

    # Stream the response to the terminal as it is generated
    await agent.aprint_response(
        "Share a two-sentence horror story.",
        stream=True,
    )

asyncio.run(main())

Usage

1. Create a virtual environment

Open the terminal and create a Python virtual environment.

python3 -m venv .venv
source .venv/bin/activate

2. Set your LLAMA API key

export LLAMA_API_KEY=YOUR_API_KEY
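The `llama-api-client` library is assumed to pick up the key from the `LLAMA_API_KEY` environment variable exported above. If you want the script to fail fast with a clear message when the key is missing, a minimal check could look like this (the `require_llama_key` helper name is hypothetical):

```python
import os

def require_llama_key() -> str:
    # Hypothetical helper: fail fast if LLAMA_API_KEY is not exported.
    key = os.environ.get("LLAMA_API_KEY")
    if not key:
        raise RuntimeError(
            "LLAMA_API_KEY is not set; run `export LLAMA_API_KEY=YOUR_API_KEY` first."
        )
    return key
```

Call `require_llama_key()` at the top of `main()` to surface a configuration error before any request is made.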

3. Install libraries

pip install llama-api-client agno

4. Run the Agent

python cookbook/models/meta/async_stream.py