
Fix "too many open files" in Docker daemon and containers

Last updated: October 07, 2025

Overview

The error "Too many open files" (EMFILE) occurs when a process exceeds its file descriptor (FD) limit. In Docker environments, you may hit limits at:

  • Kernel global file table (fs.file-max)
  • Docker daemon (dockerd) process limit
  • Default ulimit applied by Docker
  • Per-container ulimit

Fixing this requires setting sane limits across these layers and restarting affected services/containers.

Quickstart (most Linux hosts using systemd)

  1. Raise the Docker daemon process limit (persistent):
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/override.conf
[Service]
LimitNOFILE=1048576
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
  2. Set default ulimits for all new containers:
sudo mkdir -p /etc/docker
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Soft": 65535,
      "Hard": 65535
    }
  }
}
EOF
sudo systemctl restart docker
  3. Override per-container if needed:
docker run --ulimit nofile=65535:65535 --name app your-image
  4. Docker Compose example:
version: "3.8"
services:
  app:
    image: alpine
    command: sh -c 'ulimit -n; sleep 3600'
    ulimits:
      nofile:
        soft: 65535
        hard: 65535
  5. Verify inside a container:
docker run --rm alpine sh -lc 'ulimit -n'
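With the default-ulimits from step 2 in place, the command above should print 65535. To check both the soft and hard values explicitly (a quick sketch; stock Alpine's busybox shell supports the -S and -H flags):

docker run --rm alpine sh -lc 'echo "soft: $(ulimit -Sn)"; echo "hard: $(ulimit -Hn)"'
# Expect soft: 65535 and hard: 65535 after the daemon.json change above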

Minimal working example: reproduce and fix

This example opens file descriptors until it fails, then shows how raising the limit moves the failure point much higher.

Dockerfile:

FROM python:3.12-alpine
WORKDIR /app
COPY opener.py .
CMD ["python", "opener.py"]

opener.py:

import os

# Open /dev/null repeatedly until the process hits its soft nofile limit,
# then report the error and how many descriptors were opened.
fds = []
try:
    while True:
        fds.append(os.open('/dev/null', os.O_RDONLY))
except OSError as e:
    print('Failure:', e)
    print('Opened FDs:', len(fds))

Build and run with default limits:

docker build -t fd-demo .
docker run --rm fd-demo
# Expect: Failure: [Errno 24] Too many open files; Opened FDs just under the soft limit

Run with higher per-container limit:

docker run --rm --ulimit nofile=65535:65535 fd-demo
# Expect far more FDs opened (close to the 65535 soft limit) before failure
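You can also confirm from the host what limits Docker recorded for a container: HostConfig.Ulimits in the inspect output holds any explicit --ulimit values. A quick sketch using a hypothetical container name fd-demo-high:

docker run -d --name fd-demo-high --ulimit nofile=65535:65535 fd-demo
docker inspect -f '{{.HostConfig.Ulimits}}' fd-demo-high
# Output similar to: [nofile=65535:65535]
docker rm -f fd-demo-high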

Diagnosis

  • Check kernel global limit:
cat /proc/sys/fs/file-max
  • See current system-wide open FDs (approximate):
sudo lsof | wc -l
  • Inspect Docker daemon limits:
systemctl show docker -p LimitNOFILE
  • Check a container's process limits from the host (replace <cid>):
cat /proc/$(docker inspect -f '{{.State.Pid}}' <cid>)/limits | grep -i files
  • Inside a container:
ulimit -n

If any of these are low (e.g., 1024), increase them per the next section.
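The checks above can be rolled into a single pass. A convenience sketch (replace <cid> with a running container ID or name; the systemctl line assumes a systemd host):

echo "fs.file-max: $(cat /proc/sys/fs/file-max)"
systemctl show docker -p LimitNOFILE
grep -i 'open files' /proc/$(docker inspect -f '{{.State.Pid}}' <cid>)/limits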

Step-by-step fixes (Linux)

  1. Kernel global file table (optional unless globally exhausted):
  • Temporary (until reboot):
sudo sysctl -w fs.file-max=2097152
  • Persistent:
echo 'fs.file-max=2097152' | sudo tee /etc/sysctl.d/99-fd.conf
sudo sysctl --system
  2. Docker daemon process limit via systemd:
  • Set LimitNOFILE so dockerd can hold enough FDs for all containers:
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/override.conf
[Service]
LimitNOFILE=1048576
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
  3. Default per-container ulimits via daemon.json:
  • Applies to new containers if no explicit --ulimit is set:
{
  "default-ulimits": {
    "nofile": { "Name": "nofile", "Soft": 65535, "Hard": 65535 }
  }
}
  • Restart Docker after changing:
sudo systemctl restart docker
  4. Per-container overrides at runtime:
  • docker run:
docker run --ulimit nofile=131072:131072 your-image
  • Docker Compose:
services:
  svc:
    image: your-image
    ulimits:
      nofile:
        soft: 131072
        hard: 131072
  5. Validate:
  • Confirm the soft limit inside the running container matches your target:
docker exec -it <cid> sh -lc 'ulimit -n && cat /proc/$$/limits | grep -i files'
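To confirm the systemd override from step 2 reached the daemon itself, read dockerd's limits straight from /proc (assumes pidof is available and a single dockerd process):

grep -i 'open files' /proc/$(pidof dockerd)/limits
# Expect 1048576 for both soft and hard after the override in step 2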

Common pitfalls

  • Only changing the soft limit: set both soft and hard; a process cannot raise its soft limit above the hard limit from inside the container (see the demonstration after this list).
  • Forgetting to restart Docker: changes to systemd or daemon.json take effect only after a daemon restart.
  • Editing limits.conf: login shell limits do not apply to systemd services like Docker; use LimitNOFILE.
  • Compose integer vs mapping: prefer the explicit soft/hard mapping; the single-integer shorthand applies one value to both limits and hides the distinction.
  • Docker Desktop/macOS/Windows: Docker runs in a Linux VM; adjust limits inside that VM. Host systemd settings may not propagate.
  • Extreme limits: very high limits give a leaking app room to open enormous numbers of FDs, which increases kernel memory usage and can slow FD-heavy syscalls.
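A quick demonstration of the soft-versus-hard pitfall (a sketch assuming a stock Alpine image): the soft limit can be raised up to, but not past, the hard limit inherited at container start.

docker run --rm --ulimit nofile=1024:4096 alpine sh -lc '
  echo "soft: $(ulimit -Sn)"                           # 1024
  ulimit -n 4096 && echo "raised to: $(ulimit -Sn)"    # allowed: up to the hard limit
  ulimit -n 8192 || echo "cannot raise past hard limit"
'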

Performance notes

  • High limits do not allocate memory upfront; memory grows with the number of FDs actually opened.
  • Monitor FD usage to catch leaks early (see the watch-loop sketch below):
    • lsof -p <pid> | wc -l
    • ls /proc/<pid>/fd | wc -l
  • Prefer epoll/kqueue and connection pooling to avoid unbounded FD growth.
  • Use ulimit to cap runaway processes while giving healthy headroom (e.g., 65535 or 131072 for busy servers).
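For ongoing monitoring, a minimal watch loop over a container's main process (a sketch; replace <cid>, and note that reading another process's /proc/<pid>/fd typically requires root):

pid=$(docker inspect -f '{{.State.Pid}}' <cid>)
while sleep 5; do
  echo "$(date +%T) open FDs: $(sudo ls /proc/$pid/fd | wc -l)"
done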

Cheat sheet: where to set what

Layer              | Purpose                | How
Kernel             | Global FD ceiling      | fs.file-max via sysctl
Docker daemon      | Daemon FD cap          | systemd LimitNOFILE
Default container  | Baseline per-container | daemon.json default-ulimits
Specific container | App-specific needs     | --ulimit or Compose ulimits

FAQ

  • Q: Do I need to reboot?

    • A: No. Restart Docker after changing systemd or daemon.json. Reboot only if you changed kernel parameters and prefer not to use sysctl --system.
  • Q: Can I raise ulimit from inside a running container?

    • A: You can lower it, but you cannot raise it above the hard limit inherited at container start.
  • Q: What value should I use?

    • A: Common choices are 65535 or 131072. Size it based on peak concurrent files/sockets plus headroom.
  • Q: I increased limits but still see EMFILE. Why?

    • A: Check for FD leaks in the app, per-process limits of sidecars, and verify the daemon LimitNOFILE and fs.file-max are not saturated.
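To locate a leak, it helps to rank processes by open FD count. A sketch (needs root to read every /proc entry):

sudo sh -c 'for p in /proc/[0-9]*; do
  echo "$(ls "$p/fd" 2>/dev/null | wc -l) pid=${p#/proc/} $(cat "$p/comm" 2>/dev/null)"
done | sort -rn | head'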
