Overview
This guide shows practical ways to back up and restore PostgreSQL running in Docker Compose. It uses pg_dump/pg_restore for single databases and pg_dumpall for full cluster backups. All commands use docker compose (v2). Replace with docker-compose if you use v1.
Minimal working example
A small Compose file with a Postgres service and a bind-mounted backups directory.
# Minimal Postgres + backups
services:
  db:
    image: postgres:16
    container_name: pg
    environment:
      - POSTGRES_USER=app
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=appdb
      - TZ=UTC
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./backups:/backups

volumes:
  pgdata:
Quick test data. Note that POSTGRES_USER=app makes app the superuser role (no postgres role is created), so the commands below pass -U app while still running as the postgres OS user:
# Start database
docker compose up -d
# Create a sample table and data
docker compose exec -T -u postgres db psql -d appdb -c "CREATE TABLE items(id serial PRIMARY KEY, name text);"
docker compose exec -T -u postgres db psql -d appdb -c "INSERT INTO items(name) VALUES ('alpha'), ('beta');"
# Backup (custom format)
ts=$(date +%F_%H%M%S)
docker compose exec -T -u postgres db pg_dump -Fc -d appdb -f /backups/appdb_$ts.dump
# Drop and restore
docker compose exec -T -u postgres db psql -d postgres -c "DROP DATABASE IF EXISTS appdb; CREATE DATABASE appdb;"
docker compose exec -T -u postgres db pg_restore --clean --no-owner -d appdb /backups/appdb_$ts.dump
# Verify
docker compose exec -T -u postgres db psql -d appdb -c "TABLE items;"
Quickstart
- Use pg_dump for a single database, pg_dumpall for the entire cluster.
- Run utilities inside the Postgres container as the postgres OS user, passing -U app because POSTGRES_USER names the superuser role (app here); no postgres role exists.
- Save backups to a bind-mounted host path (e.g., ./backups) for persistence; a quick writability check follows this list.
- Restore with pg_restore for custom/directory format, or psql for plain SQL.
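A quick sanity check for the last two points before relying on them (a sketch; the .write_test filename is just a scratch name for this check):
# Verify that the in-container postgres user can write to the bind-mounted /backups,
# and that local socket auth works for the app superuser role
docker compose exec -T -u postgres db sh -c 'touch /backups/.write_test && rm /backups/.write_test && echo "/backups is writable by postgres"'
docker compose exec -T -u postgres db psql -U app -d appdb -c 'SELECT version();'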
Backup options
- Single database (recommended: custom format)
# Custom format (-Fc) supports selective restore and compression
TS=$(date +%F_%H%M%S)
docker compose exec -T -u postgres db \
  pg_dump -U app -Fc -d appdb -f /backups/appdb_${TS}.dump
- Entire cluster (all databases, roles, globals)
# Plain SQL format, includes roles and tablespaces
TS=$(date +%F_%H%M%S)
docker compose exec -T -u postgres db \
  pg_dumpall -U app -f /backups/cluster_${TS}.sql
- Compressed plain SQL (smaller files at the cost of CPU time)
TS=$(date +%F_%H%M%S)
docker compose exec -T -u postgres db bash -lc \
  "pg_dump -U app -d appdb | gzip > /backups/appdb_${TS}.sql.gz"
- Parallel backup for large databases (directory format)
TS=$(date +%F_%H%M%S)
docker compose exec -T -u postgres db \
  pg_dump -U app -Fd -j 4 -d appdb -f /backups/appdb_dir_${TS}
Notes:
- -Fc (custom) and -Fd (directory) are flexible; -Fd supports -j parallelism.
- Running as -u postgres connects over the local Unix socket, which the image trusts, so no password prompt is needed; -U app is still required because the only superuser role is app, not postgres. A quick readability check for finished dumps follows these notes.
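A custom- or directory-format dump can be inspected without restoring it, which makes a convenient post-backup check (the filename is the placeholder pattern used above):
# Print the dump's table of contents; a truncated or corrupt file fails immediately
docker compose exec -T -u postgres db pg_restore --list /backups/appdb_YYYY-MM-DD_HHMMSS.dump | head -n 20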
Restore recipes
- Restore a single database from custom format dump
# Recreate target DB and restore into it
FILE=/backups/appdb_YYYY-MM-DD_HHMMSS.dump
# Connect to maintenance DB for (re)creation
docker compose exec -T -u postgres db psql -U app -d postgres -c \
  "DROP DATABASE IF EXISTS appdb; CREATE DATABASE appdb;"
# Restore objects and data (--if-exists silences DROP errors in the empty database)
docker compose exec -T -u postgres db \
  pg_restore --clean --if-exists --no-owner -U app -d appdb "$FILE"
- Restore a single database with --create from custom format
# Connect to maintenance DB; let pg_restore create appdb
docker compose exec -T -u postgres db \
  pg_restore --clean --create --no-owner -U app -d postgres /backups/appdb_YYYY-MM-DD_HHMMSS.dump
- Restore directory format in parallel
# pg_restore's -j is independent of the dump's job count; size it to the server's CPUs and I/O
# The target database must already exist (recreate it as in the first recipe)
docker compose exec -T -u postgres db \
  pg_restore -j 4 --clean --no-owner -U app -d appdb /backups/appdb_dir_YYYY-MM-DD_HHMMSS
- Restore entire cluster from pg_dumpall (plain SQL)
# This recreates roles and databases; run once on a fresh cluster
FILE=/backups/cluster_YYYY-MM-DD_HHMMSS.sql
docker compose exec -T -u postgres db psql -U app -d postgres -f "$FILE"
Numbered steps (from zero to done)
1. Create docker-compose.yml as shown and mkdir -p backups.
2. Start services: docker compose up -d.
3. Verify connectivity with psql and create sample data.
4. Choose a backup method: pg_dump (single DB) or pg_dumpall (cluster).
5. Run the backup inside the db container, writing to /backups.
6. Copy backups off-host if needed, e.g., rsync or cp -a backups/ to secure storage.
7. Test the restore: drop/recreate the DB, run pg_restore, and verify the data (see the sketch after this list).
8. Automate via cron or CI using the same commands.
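A restore that has never been rehearsed is not a backup. The sketch below automates step 7 against a scratch database so the live appdb stays untouched; the script name restore_drill.sh and the database name appdb_drill are assumptions for illustration.
#!/usr/bin/env bash
# restore_drill.sh (hypothetical helper): restore the newest dump into a scratch
# database and run a minimal verification query.
set -euo pipefail

latest=$(ls -1t backups/appdb_*.dump | head -n 1)   # newest custom-format dump on the host
echo "Testing restore of ${latest}"

# Recreate the scratch database
docker compose exec -T -u postgres db psql -U app -d postgres -c \
  "DROP DATABASE IF EXISTS appdb_drill; CREATE DATABASE appdb_drill;"

# Restore into the scratch database; /backups mirrors ./backups inside the container
docker compose exec -T -u postgres db \
  pg_restore --no-owner -U app -d appdb_drill "/backups/$(basename "$latest")"

# Verify: the sample table from the minimal example should have rows
docker compose exec -T -u postgres db psql -U app -d appdb_drill -c "SELECT count(*) FROM items;"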
Automation snippet (cron on host)
# Nightly at 02:30, keeping 14 days of dumps. A crontab entry must be a single line
# (cron does not support backslash continuations), hence the long line below.
# A wrapper script is easier to maintain; see the sketch after this block.
# crontab -e
30 2 * * * cd /path/to/compose && TS=$(date +\%F_\%H\%M\%S) && docker compose exec -T -u postgres db pg_dump -U app -Fc -d appdb -f /backups/appdb_${TS}.dump && find backups -name 'appdb_*.dump' -mtime +14 -delete
Pitfalls and how to avoid them
- Version mismatch: Use a pg_dump that is the same major version as the server or newer. Running it inside the container guarantees the client and server versions match.
- Missing backups on host: Ensure ./backups is bind-mounted. Without it, files remain only inside the container.
- Permissions: The bind-mounted ./backups directory must be writable by the container's postgres user (uid 999 in the official Debian-based image); chown or chmod it on the host, or chown the dumps afterwards (see the snippet after this list).
- Long locks: pg_dump takes only ACCESS SHARE locks, but it holds them for the whole run and they block DDL such as ALTER TABLE; schedule large dumps off-peak.
- Owners and grants: Use --no-owner and restore as a superuser, then grant privileges explicitly if restoring to different roles.
- Large objects (BLOBs): pg_dump includes them by default when dumping a whole database; pass --large-objects (--blobs on older releases) to force their inclusion in selective dumps.
- Encoding and locale: Ensure the target DB has a compatible encoding; restoring with --create recreates the database with its original encoding and locale settings, which avoids mismatches.
- Disk space: Dumps can be large; monitor both container and host free space.
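Two host-side checks for the permissions and disk-space pitfalls above (paths and service names follow the minimal example; adjust the owner to your setup):
# Reclaim ownership of dumps that were written by another uid
sudo chown -R "$USER":"$USER" backups/

# Free space on the host backup directory and inside the container
df -h backups/
docker compose exec -T db df -h /backups /var/lib/postgresql/data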
Performance notes
- Use directory format (-Fd) with -j N to parallelize large backups and restores.
- Compress custom dumps with -Z to balance size vs CPU, e.g., pg_dump -Fc -Z6.
- Exclude bulky tables or indexes if acceptable: --exclude-table-data='schema.table'.
- Increase maintenance_work_mem temporarily for the restore session to speed up index builds (see the sketch after this list).
- Place backups on fast storage; avoid network bottlenecks for large dumps.
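A sketch combining these points. The compression level, job count, and 1GB maintenance_work_mem are arbitrary examples; PGOPTIONS simply passes the setting to the server for the restore's connections, so nothing is changed permanently.
TS=$(date +%F_%H%M%S)
# Parallel directory-format dump with an explicit compression level
docker compose exec -T -u postgres db \
  pg_dump -U app -Fd -j 4 -Z 6 -d appdb -f /backups/appdb_dir_${TS}

# Parallel restore with a larger maintenance_work_mem for this session only
docker compose exec -T -u postgres -e PGOPTIONS='-c maintenance_work_mem=1GB' db \
  pg_restore -j 4 --clean --if-exists --no-owner -U app -d appdb /backups/appdb_dir_${TS}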
Tiny FAQ
Q: Where are backups stored? A: In the host directory mapped to ./backups, appearing inside the container as /backups.
Q: How do I restore into a brand-new container? A: Start the new Compose stack with the same image, mount the backups directory, then run the restore commands inside the new db container.
Q: How do I back up all databases and roles? A: Use pg_dumpall to produce a cluster-wide SQL file, then restore with psql.
Q: Can I just copy the data volume for backup? A: Not safely while Postgres is running. Prefer logical dumps or physical backup tools with proper WAL handling; if a raw copy is acceptable, stop the database first (see the sketch after this FAQ).
Q: docker compose vs docker-compose? A: Commands are identical; docker compose is v2. Replace with docker-compose if your system uses v1.
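For completeness, a sketch of the offline volume copy mentioned above. It assumes a short maintenance window; the archive name is an example, and the real volume name carries the Compose project prefix, so check docker volume ls first.
docker compose stop db
# Replace <project>_pgdata with the actual volume name from: docker volume ls
docker run --rm \
  -v <project>_pgdata:/var/lib/postgresql/data:ro \
  -v "$(pwd)/backups:/backups" \
  alpine tar czf "/backups/pgdata_$(date +%F_%H%M%S).tar.gz" -C /var/lib/postgresql/data .
docker compose start db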