Code
cookbook/models/ibm/watsonx/basic_stream.py
from typing import Iterator

from agno.agent import Agent, RunResponse
from agno.models.ibm import WatsonX

agent = Agent(model=WatsonX(id="ibm/granite-20b-code-instruct"), markdown=True)

# Get the response in a variable
# run_response: Iterator[RunResponse] = agent.run("Share a 2 sentence horror story", stream=True)
# for chunk in run_response:
#     print(chunk.content)

# Print the response in the terminal
agent.print_response("Share a 2 sentence horror story", stream=True)
Usage
Create a virtual environment
Open the Terminal and create a Python virtual environment.
python3 -m venv .venv
source .venv/bin/activate
Set your API key
export IBM_WATSONX_API_KEY=xxx
export IBM_WATSONX_PROJECT_ID=xxx
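If you want the script to fail fast when a key is missing, a small pre-flight check can verify both variables before the agent runs. This is a hypothetical helper (not part of agno or ibm-watsonx-ai); the variable names come from the export commands above.

```python
import os

def check_watsonx_env(env=os.environ):
    """Return the names of required WatsonX variables that are not set.

    Hypothetical helper: checks the two variables exported above and
    reports any that are missing or empty, so you get a clear error
    instead of a failed API call.
    """
    required = ["IBM_WATSONX_API_KEY", "IBM_WATSONX_PROJECT_ID"]
    return [name for name in required if not env.get(name)]

missing = check_watsonx_env()
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
```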
Install libraries
pip install -U ibm-watsonx-ai agno
Run Agent
python cookbook/models/ibm/watsonx/basic_stream.py
This example shows how to use streaming with IBM WatsonX. Setting stream=True when calling print_response() or run() enables token-by-token streaming, which can provide a more interactive user experience.
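The consumption pattern itself can be sketched without calling the API. Here a plain generator stands in for agent.run(..., stream=True), and each yielded string plays the role of one chunk's .content; fake_stream and collect are illustrative names, not part of agno.

```python
from typing import Iterator

def fake_stream() -> Iterator[str]:
    # Stand-in for agent.run(..., stream=True): yields the response
    # a few tokens at a time instead of returning it all at once.
    for token in ["The ", "door ", "creaked ", "open."]:
        yield token

def collect(chunks: Iterator[str]) -> str:
    # Accumulate streamed chunks into the full response text, as a
    # caller might do instead of printing each chunk as it arrives.
    return "".join(chunks)

print(collect(fake_stream()))
```

With the real agent, you would iterate the RunResponse chunks the same way, printing or accumulating chunk.content as each one arrives.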