For a detailed overview of response caching, see Response Caching.
This is different from Anthropic’s prompt caching feature: response caching stores the entire model response, while prompt caching caches the system prompt to reduce processing time.
Basic Usage
Enable caching by setting cache_response=True when initializing the model. The first call hits the API and caches the response; subsequent identical calls return the cached result.
cache_model_response.py
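The snippet below is a minimal sketch of cache_model_response.py. It assumes an Agno-style Agent with the Anthropic Claude model class; the imports, model id, and prompt are illustrative and may need to be adapted to your setup.

```python
from agno.agent import Agent
from agno.models.anthropic import Claude

# cache_response=True caches the full model response, so an identical
# request is served from the cache instead of calling the API again.
agent = Agent(
    model=Claude(
        id="claude-sonnet-4-20250514",  # illustrative model id
        cache_response=True,
    ),
)

# First run hits the Anthropic API and stores the response in the cache.
agent.print_response("Write a haiku about caching.")

# An identical second run returns the cached result.
agent.print_response("Write a haiku about caching.")
```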
Usage
1. Set up your virtual environment
2. Set your API key
3. Install dependencies
4. Run Agent
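Assuming a typical Anthropic/Agno setup, the steps above might look like the following; the package names and exact commands are illustrative and may differ for your environment.

```shell
# 1. Set up your virtual environment
python -m venv .venv
source .venv/bin/activate

# 2. Set your API key
export ANTHROPIC_API_KEY=your-api-key

# 3. Install dependencies (assumed packages)
pip install -U agno anthropic

# 4. Run the agent
python cache_model_response.py
```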