Code

cookbook/models/ibm/watsonx/async_basic_stream.py
import asyncio

from agno.agent import Agent, RunResponse
from agno.models.ibm import WatsonX

agent = Agent(
    model=WatsonX(id="ibm/granite-20b-code-instruct"), debug_mode=True, markdown=True
)

# Get the response in a variable (run this inside an async function)
# run_response: AsyncIterator[RunResponse] = await agent.arun("Share a 2 sentence horror story", stream=True)
# async for chunk in run_response:
#     print(chunk.content)

# Print the response in the terminal
asyncio.run(agent.aprint_response("Share a 2 sentence horror story", stream=True))

Usage

1. Create a virtual environment

Open the Terminal and create a Python virtual environment.

python3 -m venv .venv
source .venv/bin/activate

2. Set your API key

export IBM_WATSONX_API_KEY=xxx
export IBM_WATSONX_PROJECT_ID=xxx

(If you prefer to pass credentials in code instead of the environment, see the sketch after these steps.)

3. Install libraries

pip install -U ibm-watsonx-ai agno

4. Run Agent

python cookbook/models/ibm/watsonx/async_basic_stream.py
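The agent above reads these credentials from the environment. As a minimal sketch of passing them explicitly instead, assuming the WatsonX model class accepts api_key and project_id keyword arguments (verify against the agno reference before relying on this):

import os

from agno.agent import Agent
from agno.models.ibm import WatsonX

# Assumption: WatsonX takes api_key and project_id keyword arguments; if it
# does not, fall back to the IBM_WATSONX_API_KEY / IBM_WATSONX_PROJECT_ID
# environment variables set in step 2.
agent = Agent(
    model=WatsonX(
        id="ibm/granite-20b-code-instruct",
        api_key=os.getenv("IBM_WATSONX_API_KEY"),
        project_id=os.getenv("IBM_WATSONX_PROJECT_ID"),
    ),
    markdown=True,
)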

This example combines asynchronous execution with streaming. It creates an agent with debug_mode=True for additional logging and calls the asynchronous aprint_response with stream=True, so the response is displayed in the terminal as it is generated.
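To capture the streamed output in a variable rather than printing it, here is a minimal sketch following the commented lines in the script above (assuming agent.arun(..., stream=True) can be awaited to obtain an async iterator of chunks that expose a content attribute):

import asyncio

from agno.agent import Agent
from agno.models.ibm import WatsonX

agent = Agent(model=WatsonX(id="ibm/granite-20b-code-instruct"), markdown=True)


async def main() -> None:
    # Accumulate chunk contents as they stream in, then join them at the end.
    parts = []
    run_response = await agent.arun("Share a 2 sentence horror story", stream=True)
    async for chunk in run_response:
        if chunk.content is not None:
            parts.append(chunk.content)
    print("".join(parts))


asyncio.run(main())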