Agno supports Google Cloud Storage (GCS) as a storage backend for Workflows via the `GcsJsonDb` class, which stores session data as JSON blobs in a GCS bucket.
## Usage
Configure your workflow with GCS storage to enable cloud-based session persistence.
```python
import uuid

import google.auth

from agno.agent import Agent
from agno.db.gcs_json import GcsJsonDb
from agno.models.openai import OpenAIResponses
from agno.team import Team
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.tools.hackernews import HackerNewsTools
from agno.workflow.step import Step
from agno.workflow.workflow import Workflow

# Obtain the default credentials and project id from your gcloud CLI session.
credentials, project_id = google.auth.default()

# Generate a unique bucket name using a base name and a UUID4 suffix.
base_bucket_name = "example-gcs-bucket"
unique_bucket_name = f"{base_bucket_name}-{uuid.uuid4().hex[:12]}"
print(f"Using bucket: {unique_bucket_name}")

# Setup the JSON database
db = GcsJsonDb(
    bucket_name=unique_bucket_name,
    prefix="workflow/",
    project=project_id,
    credentials=credentials,
)

# Define agents
hackernews_agent = Agent(
    name="Hackernews Agent",
    model=OpenAIResponses(id="gpt-5.2"),
    tools=[HackerNewsTools()],
    role="Extract key insights and content from Hackernews posts",
)
web_agent = Agent(
    name="Web Agent",
    model=OpenAIResponses(id="gpt-5.2"),
    tools=[DuckDuckGoTools()],
    role="Search the web for the latest news and trends",
)

# Define research team for complex analysis
research_team = Team(
    name="Research Team",
    members=[hackernews_agent, web_agent],
    instructions="Research tech topics from Hackernews and the web",
)

content_planner = Agent(
    name="Content Planner",
    model=OpenAIResponses(id="gpt-5.2"),
    instructions=[
        "Plan a content schedule over 4 weeks for the provided topic and research content",
        "Ensure the schedule includes 3 posts per week",
    ],
)

# Define steps
research_step = Step(
    name="Research Step",
    team=research_team,
)
content_planning_step = Step(
    name="Content Planning Step",
    agent=content_planner,
)

# Create and use workflow
if __name__ == "__main__":
    content_creation_workflow = Workflow(
        name="Content Creation Workflow",
        description="Automated content creation from blog posts to social media",
        db=db,
        steps=[research_step, content_planning_step],
    )
    content_creation_workflow.print_response(
        input="AI trends in 2024",
        markdown=True,
    )
```
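Because the workflow is wired to `GcsJsonDb`, session state is persisted to the bucket between runs. Below is a minimal sketch of resuming a prior session, assuming `Workflow` accepts a `session_id` as other Agno components do; the session ID is illustrative.

```python
# Reuse a fixed session_id so repeated runs append to the same stored session.
# Assumption: Workflow accepts session_id, as other Agno components do.
resumable_workflow = Workflow(
    name="Content Creation Workflow",
    db=db,
    steps=[research_step, content_planning_step],
    session_id="content-campaign-2024",  # illustrative ID
)
resumable_workflow.print_response(
    input="Follow up on last week's plan",
    markdown=True,
)
```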
## Prerequisites

### Google Cloud SDK Setup

- Install the Google Cloud SDK
- Run `gcloud init` to configure your account and project
### GCS Permissions

Ensure your account has sufficient permissions (e.g., Storage Admin) to create and manage GCS buckets:

```bash
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
  --member="user:YOUR_EMAIL@example.com" \
  --role="roles/storage.admin"
```
### Authentication

Use default credentials from your gcloud CLI session:

```bash
gcloud auth application-default login
```

Alternatively, if using a service account, set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path of your service account JSON file.
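You can also pass service-account credentials to `GcsJsonDb` explicitly instead of relying on the environment variable. A minimal sketch, where the key path and bucket name are illustrative:

```python
from google.oauth2 import service_account

from agno.db.gcs_json import GcsJsonDb

# Load credentials from a service account key file (path is illustrative).
credentials = service_account.Credentials.from_service_account_file(
    "/path/to/service-account.json"
)

# Pass credentials and project directly instead of relying on
# GOOGLE_APPLICATION_CREDENTIALS or application-default credentials.
db = GcsJsonDb(
    bucket_name="my-agno-bucket",  # illustrative bucket name
    prefix="workflow/",
    project=credentials.project_id,
    credentials=credentials,
)
```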
### Python Dependencies

Install the required Python packages:

```bash
pip install google-auth google-cloud-storage openai ddgs
```
## Setup with Docker

For local testing without using real GCS, you can use fake-gcs-server. Create a `docker-compose.yml` file:

```yaml
version: '3.8'
services:
  fake-gcs-server:
    image: fsouza/fake-gcs-server:latest
    ports:
      - "4443:4443"
    command: ["-scheme", "http", "-port", "4443", "-public-host", "localhost"]
    volumes:
      - ./fake-gcs-data:/data
```
Start the fake GCS server:

```bash
docker-compose up -d
```

### Using Fake GCS with Docker
Set the environment variable to direct API calls to the emulator:

```bash
export STORAGE_EMULATOR_HOST="http://localhost:4443"
python gcs_for_agent.py
```
When using Fake GCS, authentication isn't enforced and the client will automatically detect the emulator endpoint.
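To sanity-check the emulator before running a workflow, here is a minimal sketch using the `google-cloud-storage` client directly; the bucket, blob, and project names are illustrative:

```python
import os

from google.auth.credentials import AnonymousCredentials
from google.cloud import storage

# Point the client at the local emulator started by docker-compose above.
os.environ["STORAGE_EMULATOR_HOST"] = "http://localhost:4443"

# fake-gcs-server does not enforce auth, so anonymous credentials suffice.
client = storage.Client(credentials=AnonymousCredentials(), project="test-project")

# Create a bucket, write a blob, and read it back.
bucket = client.create_bucket("emulator-smoke-test")
blob = bucket.blob("agno/healthcheck.json")
blob.upload_from_string('{"status": "ok"}', content_type="application/json")
print(blob.download_as_text())  # -> {"status": "ok"}
```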
## Params

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `id` | `Optional[str]` | - | The ID of the database instance. UUID by default. |
| `bucket_name` | `str` | - | Name of the GCS bucket where JSON files will be stored. |
| `prefix` | `Optional[str]` | - | Path prefix for organizing files in the bucket. Defaults to `"agno/"`. |
| `session_table` | `Optional[str]` | - | Name of the JSON file used to store sessions (without the `.json` extension). |
| `memory_table` | `Optional[str]` | - | Name of the JSON file used to store user memories. |
| `metrics_table` | `Optional[str]` | - | Name of the JSON file used to store metrics. |
| `eval_table` | `Optional[str]` | - | Name of the JSON file used to store evaluation runs. |
| `knowledge_table` | `Optional[str]` | - | Name of the JSON file used to store knowledge content. |
| `traces_table` | `Optional[str]` | - | Name of the JSON file used to store traces. |
| `spans_table` | `Optional[str]` | - | Name of the JSON file used to store spans. |
| `project` | `Optional[str]` | - | GCP project ID. If `None`, uses the default project. |
| `credentials` | `Optional[Any]` | - | GCP credentials. If `None`, uses the default credentials. |
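For example, here is a sketch of a more fully parameterized `GcsJsonDb`; the bucket, prefix, file, and project names are all illustrative:

```python
from agno.db.gcs_json import GcsJsonDb

# All names below are illustrative; unset parameters fall back to their defaults.
db = GcsJsonDb(
    bucket_name="my-agno-bucket",
    prefix="workflows/content-team/",  # overrides the default "agno/" prefix
    session_table="sessions",          # stored as sessions.json under the prefix
    memory_table="memories",
    metrics_table="metrics",
    project="my-gcp-project",          # uses the default project if None
)
```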