
Docker Compose for FastAPI with Postgres, Redis, and NGINX/Caddy

Last updated: October 06, 2025

Overview

This guide shows a practical Docker Compose setup for a FastAPI service backed by Postgres and Redis, fronted by either NGINX or Caddy. It includes healthchecks, sane defaults, and a minimal FastAPI app to verify everything works.

What you get:

  • FastAPI app running with Uvicorn
  • Postgres 16 with persistent volume
  • Redis 7 with AOF persistence
  • Reverse proxy via NGINX or Caddy (choose at run time)
  • Healthchecks and service dependencies

Minimal working example

Create the following files and directories:

  • app/Dockerfile
  • app/main.py
  • deploy/nginx.conf
  • deploy/Caddyfile
  • docker-compose.yml

docker-compose.yml:

services:
  api:
    build: ./app
    environment:
      - DATABASE_URL=postgresql://postgres:postgres@db:5432/app
      - REDIS_URL=redis://redis:6379/0
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks: [backend]

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -d app -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 10
    networks: [backend]

  redis:
    image: redis:7-alpine
    command: ["redis-server", "--appendonly", "yes"]
    volumes:
      - redisdata:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 10
    networks: [backend]

  nginx:
    image: nginx:1.27-alpine
    depends_on:
      api:
        condition: service_started
    volumes:
      - ./deploy/nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "8080:80"
    networks: [backend]
    profiles: ["nginx"]

  caddy:
    image: caddy:2-alpine
    depends_on:
      api:
        condition: service_started
    volumes:
      - ./deploy/Caddyfile:/etc/caddy/Caddyfile:ro
    ports:
      - "8080:80"
    networks: [backend]
    profiles: ["caddy"]

networks:
  backend: {}

volumes:
  pgdata: {}
  redisdata: {}

app/Dockerfile:

FROM python:3.12-slim
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1
WORKDIR /app
RUN pip install --no-cache-dir fastapi "uvicorn[standard]" "psycopg[binary]" redis==5.0.6
COPY . .
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

app/main.py:

import os
import asyncio
from fastapi import FastAPI
import redis.asyncio as redis
import psycopg

app = FastAPI()

REDIS_URL = os.getenv("REDIS_URL", "redis://redis:6379/0")
DATABASE_URL = os.getenv("DATABASE_URL", "postgresql://postgres:postgres@db:5432/app")

r = redis.from_url(REDIS_URL, decode_responses=True)

@app.get("/health")
def health():
    return {"status": "ok"}

@app.get("/redis")
async def hit():
    cnt = await r.incr("hits")
    return {"hits": cnt}

@app.get("/db")
async def db_ping():
    # psycopg 3 async connection
    async with await psycopg.AsyncConnection.connect(DATABASE_URL) as aconn:
        async with aconn.cursor() as cur:
            await cur.execute("select 1")
            row = await cur.fetchone()
            return {"db_select_1": row[0]}

deploy/nginx.conf:

worker_processes auto;
events { worker_connections 1024; }
http {
  upstream api_upstream { server api:8000; }
  server {
    listen 80;
    location / {
      proxy_pass http://api_upstream;
      proxy_http_version 1.1;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_read_timeout 60s;
    }
  }
}

deploy/Caddyfile:

:80 {
  reverse_proxy api:8000
}

Quickstart

  1. Create directories: app and deploy.
  2. Add the files above into those directories.
  3. Build and start with NGINX:
    • docker compose --profile nginx up --build -d
  4. Verify:
    • curl http://localhost:8080/health → {"status":"ok"}
    • curl http://localhost:8080/redis → increments counter
    • curl http://localhost:8080/db → returns 1
  5. Switch to Caddy (stop NGINX profile first):
    • docker compose --profile nginx down
    • docker compose --profile caddy up -d
  6. Inspect logs if needed:
    • docker compose logs -f api
    • docker compose logs -f db

How it works

  • api builds a small FastAPI image and exposes port 8000 internally.
  • db and redis provide data stores with persistent volumes.
  • nginx or caddy acts as the public entrypoint on host port 8080.
  • Healthchecks plus depends_on with condition: service_healthy hold back the api container until db and redis report healthy.
  • All services share the backend network for name-based discovery: api reaches db via host db, redis via host redis.
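To confirm name-based discovery, you can resolve a service name from inside the api container and hit the API directly on the backend network; this is a quick sanity check assuming the stack is already up:

```shell
# Resolve the db service name from inside the api container
docker compose exec api python -c "import socket; print(socket.gethostbyname('db'))"

# Call the API on port 8000 directly, bypassing the proxy
docker compose exec api python -c "import urllib.request; print(urllib.request.urlopen('http://localhost:8000/health').read())"
```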

Common pitfalls

  • A bare depends_on only waits for containers to start, not to be ready; the condition: service_healthy gates used here wait for the db and redis healthchecks to pass. For more complex setups, also implement app-level retries.
  • Port conflicts: ensure 8080 is free, or change the host port mapping.
  • Incorrect DATABASE_URL: psycopg 3 expects a libpq-style URL such as postgresql://user:pass@host:5432/db. The postgresql+psycopg:// form is a SQLAlchemy dialect string and is rejected by psycopg.connect. Mismatched drivers or typos cause connection failures.
  • File permission issues on bind mounts (especially on Windows/WSL): prefer volumes for Postgres data, and avoid bind-mounting its data directory.
  • Redis persistence defaults: AOF is enabled here. If you disable it, persistence falls back to periodic RDB snapshots, so recent writes can be lost across restarts.
  • Large responses timing out: increase proxy_read_timeout in NGINX or tune Caddy timeouts if you stream big payloads.
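The app-level retries mentioned above can be sketched with a small generic helper; with_retries and its parameters are illustrative names, not part of any library:

```python
import time

def with_retries(connect, attempts=10, base_delay=0.5):
    """Call connect() until it succeeds, doubling the delay between attempts.

    Re-raises the last error if every attempt fails.
    """
    for attempt in range(attempts):
        try:
            return connect()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# At startup you might wrap the psycopg connect call, e.g.:
#   conn = with_retries(lambda: psycopg.connect(DATABASE_URL))
```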

Performance notes

  • Use a production server: for heavier loads, run FastAPI under Gunicorn with Uvicorn workers (e.g., one worker per CPU core) rather than a single Uvicorn process.
  • Connection pooling: open long-lived DB pools on startup instead of per-request connects. psycopg 3 supports AsyncConnectionPool; Redis client already pools connections.
  • Keep images slim: pin alpine base images, avoid unnecessary packages, and use pip --no-cache-dir. Consider a multi-stage build if you compile dependencies.
  • Enable HTTP keep-alive: NGINX and Caddy keep client connections alive by default; for upstream keep-alive in NGINX, add a keepalive directive to the upstream block, and keep upstream timeouts reasonable to avoid premature disconnects.
  • Caching: use Redis for application-level caching and short-lived response caching at the proxy when safe.
  • Healthcheck cadence: avoid overly aggressive healthchecks that waste CPU. The provided intervals are conservative.
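As a sketch of the Gunicorn setup above, the Dockerfile's CMD could be swapped for something like the following (add gunicorn to the pip install line first; the worker count of 4 is an assumption to tune per host):

```dockerfile
# Run 4 Uvicorn workers under Gunicorn's process manager
CMD ["gunicorn", "main:app", "-k", "uvicorn.workers.UvicornWorker", \
     "--workers", "4", "--bind", "0.0.0.0:8000"]
```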

Extending this setup

  • Add Alembic migrations by mounting a migrations directory and running a one-off container: docker compose run --rm api alembic upgrade head.
  • Serve static files: let NGINX/Caddy serve static assets directly from a volume instead of the app.
  • HTTPS: in production, terminate TLS at NGINX or let Caddy obtain certificates automatically. For local development, keep plaintext on :8080.
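The static-file idea can be sketched as an extra location block inside the server block of deploy/nginx.conf; the /srv/static path and the volume backing it are hypothetical:

```nginx
# Serve /static/* straight from a mounted volume, bypassing the app
location /static/ {
  alias /srv/static/;
  expires 1h;
  add_header Cache-Control "public";
}
```

You would also mount the assets read-only into the nginx service, e.g. a named volume at /srv/static.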

Troubleshooting

  • 502 Bad Gateway at proxy:
    • Check api logs for startup errors.
    • Confirm the proxy points to api:8000 and the container is on the same network.
  • Postgres auth errors:
    • Ensure POSTGRES_PASSWORD matches the password in DATABASE_URL.
    • Drop the volume if you changed credentials after first run: docker volume rm <project>_pgdata.
  • Redis not persisting:
    • Verify AOF is enabled and redisdata volume is attached.
  • Slow responses:
    • Check DB query plans and add indices. Increase worker count. Verify no per-request DB connections are created in hot paths.
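For the 502 case above, one quick check is to call the upstream from inside the proxy container; busybox wget ships with the alpine images, so no extra tooling is needed:

```shell
# Should print {"status":"ok"} if api is reachable from the proxy
docker compose exec nginx wget -qO- http://api:8000/health
```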

FAQ

  • Can I run both NGINX and Caddy together?
    • Yes, but map them to different host ports to avoid collisions. Profiles make it easy to run one at a time.
  • How do I add HTTPS locally?
    • Use Caddy with local TLS (caddy trust) or terminate TLS on a dev cert in NGINX. For simplicity, this guide uses HTTP on :8080.
  • Where do I change credentials?
    • In docker-compose.yml environment for db and in DATABASE_URL for api. Prefer using a .env file with docker compose --env-file.
  • How do I persist Postgres data?
    • The pgdata named volume persists across container restarts. Remove it only if you want a clean slate.
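A minimal sketch of the .env approach mentioned above; the variable name and value here are placeholders:

```shell
# Write the secret once in .env (placeholder value)
echo 'POSTGRES_PASSWORD=change-me' > .env

# docker-compose.yml can then reference it instead of a literal:
#   environment:
#     POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}

# Compose reads .env from the project directory automatically;
# --env-file makes the choice explicit:
docker compose --env-file .env --profile nginx up -d
```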

Series: Docker