Agno supports Google Cloud Storage (GCS) as a storage backend for Workflows via the GcsJsonDb class, which stores session data as JSON blobs in a GCS bucket.

Usage

Configure your workflow with GCS storage to enable cloud-based session persistence.
gcs_for_workflow.py
import uuid
import google.auth
from agno.agent import Agent
from agno.db.gcs_json import GcsJsonDb
from agno.models.openai import OpenAIChat
from agno.team import Team
from agno.tools.duckduckgo import DuckDuckGoTools
from agno.tools.hackernews import HackerNewsTools
from agno.workflow.step import Step
from agno.workflow.workflow import Workflow

# Obtain the default credentials and project id from your gcloud CLI session.
credentials, project_id = google.auth.default()

# Generate a unique bucket name using a base name and a UUID4 suffix.
base_bucket_name = "example-gcs-bucket"
unique_bucket_name = f"{base_bucket_name}-{uuid.uuid4().hex[:12]}"
print(f"Using bucket: {unique_bucket_name}")

# Setup the JSON database
db = GcsJsonDb(
    bucket_name=unique_bucket_name,
    prefix="workflow/",
    project=project_id,
    credentials=credentials,
)

# Define agents
hackernews_agent = Agent(
    name="Hackernews Agent",
    model=OpenAIChat(id="gpt-5-mini"),
    tools=[HackerNewsTools()],
    role="Extract key insights and content from Hackernews posts",
)
web_agent = Agent(
    name="Web Agent",
    model=OpenAIChat(id="gpt-5-mini"),
    tools=[DuckDuckGoTools()],
    role="Search the web for the latest news and trends",
)

# Define research team for complex analysis
research_team = Team(
    name="Research Team",
    members=[hackernews_agent, web_agent],
    instructions="Research tech topics from Hackernews and the web",
)

content_planner = Agent(
    name="Content Planner",
    model=OpenAIChat(id="gpt-5-mini"),
    instructions=[
        "Plan a content schedule over 4 weeks for the provided topic and research content",
        "Ensure that I have posts for 3 posts per week",
    ],
)

# Define steps
research_step = Step(
    name="Research Step",
    team=research_team,
)

content_planning_step = Step(
    name="Content Planning Step",
    agent=content_planner,
)

# Create and use workflow
if __name__ == "__main__":
    content_creation_workflow = Workflow(
        name="Content Creation Workflow",
        description="Automated content creation from blog posts to social media",
        db=db,
        steps=[research_step, content_planning_step],
    )
    content_creation_workflow.print_response(
        input="AI trends in 2024",
        markdown=True,
    )
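
After a run completes, you can inspect what was persisted by listing the JSON blobs with the google-cloud-storage client. A minimal sketch, reusing credentials, project_id, and unique_bucket_name from the example above (the exact blob layout under the prefix is an implementation detail of GcsJsonDb):

from google.cloud import storage

# Reuse the credentials, project_id, and unique_bucket_name defined above.
client = storage.Client(project=project_id, credentials=credentials)

# List everything GcsJsonDb wrote under the configured prefix.
for blob in client.list_blobs(unique_bucket_name, prefix="workflow/"):
    print(blob.name, blob.size)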

Prerequisites

1. Google Cloud SDK Setup

  1. Install the Google Cloud SDK
  2. Run gcloud init to configure your account and project
2. GCS Permissions

Ensure your account has sufficient permissions (e.g., Storage Admin) to create and manage GCS buckets:
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
    --member="user:YOUR_EMAIL@example.com" \
    --role="roles/storage.admin"
3. Authentication

Use default credentials from your gcloud CLI session:
gcloud auth application-default login
Alternatively, if using a service account, set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of your service account JSON file.
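
If you prefer to construct the credentials object yourself rather than rely on the environment variable, here is a minimal sketch using google.oauth2 (the key file path, bucket name, and project ID are placeholders):

from google.oauth2 import service_account

from agno.db.gcs_json import GcsJsonDb

# Load credentials from a service account key file (placeholder path).
credentials = service_account.Credentials.from_service_account_file(
    "/path/to/service-account.json"
)

db = GcsJsonDb(
    bucket_name="my-bucket",    # placeholder bucket name
    project="my-project-id",    # placeholder project ID
    credentials=credentials,
)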
4. Python Dependencies

Install the required Python packages:
pip install google-auth google-cloud-storage openai ddgs
1. Setup with Docker

For local testing without using real GCS, you can use fake-gcs-server. Create a docker-compose.yml file:
version: '3.8'
services:
  fake-gcs-server:
    image: fsouza/fake-gcs-server:latest
    ports:
      - "4443:4443"
    command: ["-scheme", "http", "-port", "4443", "-public-host", "localhost"]
    volumes:
      - ./fake-gcs-data:/data
Start the fake GCS server:
docker-compose up -d
2. Using Fake GCS with Docker

Set the environment variable to direct API calls to the emulator:
export STORAGE_EMULATOR_HOST="http://localhost:4443"
python gcs_for_workflow.py
When using Fake GCS, authentication isn't enforced and the client will automatically detect the emulator endpoint.
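
If you want to wire the client up explicitly instead of relying on auto-detection, here is a sketch that connects to the emulator with anonymous credentials (the bucket and project names are placeholders):

import os

from google.auth.credentials import AnonymousCredentials
from google.cloud import storage

# Direct the client library at the local emulator.
os.environ["STORAGE_EMULATOR_HOST"] = "http://localhost:4443"

# Fake GCS does not enforce auth, so anonymous credentials suffice.
client = storage.Client(credentials=AnonymousCredentials(), project="test-project")
client.create_bucket("smoke-test-bucket")
print([bucket.name for bucket in client.list_buckets()])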

Params

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| bucket_name | str | - | Name of the GCS bucket where JSON files will be stored. |
| prefix | Optional[str] | "agno/" | Path prefix for organizing files in the bucket. |
| session_table | Optional[str] | - | Name of the JSON file to store sessions (without the .json extension). |
| memory_table | Optional[str] | - | Name of the JSON file to store user memories. |
| metrics_table | Optional[str] | - | Name of the JSON file to store metrics. |
| eval_table | Optional[str] | - | Name of the JSON file to store evaluation runs. |
| knowledge_table | Optional[str] | - | Name of the JSON file to store knowledge content. |
| project | Optional[str] | - | GCP project ID. If None, uses the default project. |
| credentials | Optional[Any] | - | GCP credentials. If None, uses default credentials. |
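
For example, to group a project's files under a custom prefix with custom file names, a sketch using the parameters above (the bucket and file names are placeholders):

from agno.db.gcs_json import GcsJsonDb

db = GcsJsonDb(
    bucket_name="my-bucket",     # placeholder bucket name
    prefix="myapp/",             # blobs are stored under this path
    session_table="sessions",    # stored as sessions.json under the prefix
    memory_table="memories",     # stored as memories.json under the prefix
)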