AgentOS is a FastAPI app. Deploy it like any normal Python service: a container, a Postgres database, a public hostname, and a few env vars.
from agno.os import AgentOS

# `agent` and `db` are an Agent and a database instance defined elsewhere
agent_os = AgentOS(agents=[agent], db=db, tracing=True)
app = agent_os.get_app()

if __name__ == "__main__":
    # "my_app:app" must match this module's filename
    agent_os.serve(app="my_app:app", reload=False)

What you need to ship

| Resource | Why |
| --- | --- |
| Container host | Runs the FastAPI process |
| PostgreSQL | Sessions, memory, knowledge, traces, schedules |
| Public hostname | Required for Slack, Telegram, and WhatsApp interfaces |
| HTTPS | Required for every webhook interface; terminate at your load balancer or reverse proxy |
| Env vars | At minimum `OPENAI_API_KEY` and `JWT_VERIFICATION_KEY` (in prod) |
AgentOS handles queues, worker pools, the scheduler, and JWT auth in-process. No separate worker fleet, no separate auth server, no separate cron container.
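A minimal production `.env` sketch tying the two required variables together with the `RUNTIME_ENV=prd` flag from the checklist below. The values are placeholders; everything beyond these three names is specific to your setup:

```shell
# Required
OPENAI_API_KEY=sk-...
JWT_VERIFICATION_KEY=...

# Recommended in production: enables JWT auth
RUNTIME_ENV=prd
```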

Local with Docker Compose

services:
  agentos:
    build: .
    ports:
      - "8000:8000"
    env_file: .env
    depends_on:
      - db

  db:
    image: agnohq/pgvector:18
    environment:
      POSTGRES_USER: ai
      POSTGRES_PASSWORD: ai
      POSTGRES_DB: ai
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
agnohq/pgvector is Postgres 18 with the pgvector extension preinstalled — needed for knowledge embeddings.
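Note that `depends_on` only waits for the db container to start, not for Postgres to accept connections. A hedged sketch of a Compose healthcheck that closes that gap, assuming the stock `pg_isready` binary shipped in Postgres images:

```yaml
  db:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ai -d ai"]
      interval: 5s
      timeout: 3s
      retries: 10

  agentos:
    depends_on:
      db:
        condition: service_healthy
```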
docker compose up -d --build
curl http://localhost:8000/health
The Scout, Dash, and Coda tutorials all start here.

Railway

Each tutorial template ships with Railway scripts for one-command deploys:
railway login
./scripts/railway_up.sh        # provision Postgres + app, get a domain
./scripts/railway_env.sh       # sync .env.production to the service
./scripts/railway_redeploy.sh  # push code updates
railway_up.sh provisions the project, adds pgvector with a persistent volume, creates the app service with env vars, and assigns a public domain. Walkthroughs: Scout deploy, Dash deploy, Coda deploy.

AWS, GCP, and Azure

Any container platform works. The shape:
| Component | Service options | What runs in it |
| --- | --- | --- |
| App service | ECS Fargate, Cloud Run, App Service | The AgentOS container, port 8000 |
| Postgres | RDS, Cloud SQL, Postgres Flexible Server | Sessions, memory, knowledge, traces |
| Load balancer / ingress | ALB, Cloud Load Balancing, Application Gateway | Public HTTPS termination |
| Secret manager | Secrets Manager, Secret Manager, Key Vault | `OPENAI_API_KEY`, `JWT_VERIFICATION_KEY` |
Point health checks at the `/health` endpoint. AgentOS responds `{"status":"ok"}` when the app is ready.
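Deploy pipelines often gate traffic on that endpoint. A minimal readiness-wait helper, as a sketch: the `probe` callable is an assumption, standing in for an HTTP GET of `/health` that returns True on a 200 response.

```python
import time


def wait_for_ready(probe, timeout=60.0, interval=2.0, sleep=time.sleep):
    """Poll `probe()` until it returns True (service healthy) or `timeout` elapses.

    `probe` is any zero-argument callable, e.g. one that GETs /health
    and returns True when the body is {"status": "ok"}.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return True
        sleep(interval)
    return False
```

Injecting `probe` and `sleep` keeps the helper transport-agnostic and easy to test without a running server.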

Scaling

AgentOS is stateless: all state lives in the database you pass as `db`. Scale horizontally:
| Concern | Solution |
| --- | --- |
| Throughput | Add app replicas behind a load balancer |
| LLM rate limits | Use a queue or rate limiter in front of the model client |
| Long-running runs | Use `background=true` on the run endpoint, then poll for completion (see Serve as an API) |
| Side effects without blocking the response | Background hooks with `run_in_background=True` |
| Schedule fan-out | The scheduler runs on a single replica's lifespan; for HA, use leader election or pin scheduling to one replica |
| Trace volume | Use a separate `trace_db` to keep the primary lean (see Observability) |
For the leader-election pattern with multiple replicas, see Scheduler HA.
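The `background=true` pattern reduces to a generic poll loop. A sketch, with the caveat that the status names and the `get_status` callable are assumptions; the real endpoint shapes are covered in Serve as an API:

```python
import time

# Illustrative terminal states; match these to your run endpoint's actual values.
TERMINAL = {"completed", "failed", "cancelled"}


def poll_run(get_status, timeout=300.0, interval=2.0, sleep=time.sleep):
    """Poll `get_status()` (e.g. a GET of the run's status endpoint)
    until the run reaches a terminal state or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while True:
        status = get_status()
        if status in TERMINAL:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"run still {status!r} after {timeout}s")
        sleep(interval)
```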

Production checklist

Auth and secrets
  • RUNTIME_ENV=prd enables JWT auth
  • JWT_VERIFICATION_KEY set (see Security & Auth)
  • OPENAI_API_KEY and other model keys in a secret manager, not in source
Infrastructure
  • Postgres has a persistent volume or managed backup
  • HTTPS terminating at your load balancer or reverse proxy
  • Health check pointed at /health
Operational
  • Tracing on (tracing=True) so you can debug bad runs
  • At least one interface wired up
  • Pre-hooks for PII or injection guarding if you handle untrusted input
  • requires_confirmation=True on irreversible tools
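As one concrete shape for the PII pre-hook item: a minimal scrubber that masks email addresses before untrusted input reaches the model. The function name and regex are illustrative, and how a pre-hook gets registered depends on your AgentOS setup:

```python
import re

# Illustrative pattern; broaden for phone numbers, account IDs, etc. as needed.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")


def scrub_pii(text: str) -> str:
    """Mask email addresses in untrusted input before it hits the model."""
    return EMAIL.sub("[email redacted]", text)
```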

Updating your deployment

  • Code changes: `git push` if CI auto-deploys, or `./scripts/railway_redeploy.sh`.
  • Env changes: `./scripts/railway_env.sh` (Railway auto-redeploys when env values change).
  • Database changes: AgentOS manages its own tables; schema changes are additive and forward-compatible, so stock AgentOS tables need no migration tool. Application tables you migrate however you like (Alembic, raw SQL, dbt, your call).

Next

Build a Product →