Fix Docker 'layer does not exist' from dangling images

Last updated: October 07, 2025

Overview

The Docker error "Error response from daemon: layer does not exist" often appears when containers or images reference layers that were garbage‑collected or left dangling (<none>:<none>). This guide shows how to diagnose and fix it quickly and safely.

Quickstart (most cases)

  • List dangling images:
    • docker images -f dangling=true
  • Remove dangling image layers:
    • docker image prune -f
  • Remove stale containers referencing missing layers (replace names/IDs):
    • docker ps -a --filter status=exited
    • docker rm -f <container>
  • Rebuild or re‑pull images and recreate containers:
    • docker compose up -d --build
    • or: docker build -t myimg:latest . && docker run --rm myimg:latest
  • If the daemon is stuck, restart it and prune build cache:
    • Linux: sudo systemctl restart docker
    • docker builder prune -f
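
If you prefer a single pass, the sketch below strings the quickstart together. It assumes a Compose project in the current directory, and it removes every exited container, so skip that line if you keep stopped containers deliberately:

#!/usr/bin/env bash
set -euo pipefail

# Drop dangling (<none>:<none>) images not referenced by any container
docker image prune -f

# Remove all exited containers (-r is GNU xargs; BSD xargs skips empty input by default)
docker ps -aq --filter status=exited | xargs -r docker rm -f

# Clear BuildKit cache, then rebuild and recreate services
docker builder prune -f
docker compose up -d --build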

Minimal working example

The example below creates a dangling image, prunes it, then attempts to restart a container that references the old image, and finally shows the fix. Note that docker image prune skips images referenced by containers, even stopped ones, so on a healthy daemon the restart may still succeed; the "layer does not exist" error surfaces when layers were force-removed or daemon metadata is inconsistent.

#!/usr/bin/env bash
set -euxo pipefail

# 1) Prepare a simple image (v1) and a container
workdir=$(mktemp -d)
cd "$workdir"
cat > Dockerfile <<'EOF'
FROM alpine:3.19
RUN echo v1 > /version
CMD ["cat", "/version"]
EOF

docker build -t demo:latest .
docker run --name demo_v1 -d demo:latest sleep 600

echo "Built and started demo_v1 based on v1"

# 2) Replace the image (v2), making the old image a future dangling candidate
cat > Dockerfile <<'EOF'
FROM alpine:3.19
RUN echo v2 > /version
CMD ["cat", "/version"]
EOF

docker build -t demo:latest .

# 3) Stop the old container and prune dangling images
docker stop demo_v1 || true

echo "Dangling images before prune:" && docker images -f dangling=true || true
docker image prune -f

# 4) Attempt to start the old container; this fails if its layers were removed
set +e
docker start demo_v1
status=$?
set -e
if [ $status -ne 0 ]; then
  echo "Expected failure: container references a pruned image layer"
fi

# 5) Fix: remove the stale container and recreate from the rebuilt image
docker rm -f demo_v1 || true
docker run --name demo_v2 --rm demo:latest cat /version

What you’ll observe:

  • After pruning, starting demo_v1 may fail with a layer-related error.
  • Recreating the container from the current image works.

Step-by-step diagnosis and fix

  1. Identify dangling images and missing-layer references (a scripted check follows this list)
  • List dangling images: docker images -f dangling=true
  • Show disk usage and references: docker system df -v
  • Find containers that failed recently: docker ps -a --filter status=exited
  2. Safely remove stale containers
  • If a container references missing layers, remove it and plan to recreate:
    • docker rm -f <container>
  • With Compose/Swarm, remove via your orchestrator to keep configs consistent.
  3. Prune unused images and cache
  • Remove dangling images only: docker image prune -f
  • Remove build cache (BuildKit): docker builder prune -f
  • For heavy cleanup (careful): docker system prune -a --volumes -f
    • Use filters to constrain scope, e.g.: --filter until=24h
  4. Rebuild or re-pull images
  • Local Dockerfile: docker build -t myorg/app:latest .
  • From registry: docker pull myorg/app:latest
  • With Compose: docker compose up -d --build
  5. Restart the Docker daemon if state is inconsistent
  • Linux: sudo systemctl restart docker
  • Desktop: restart Docker Desktop
  • Then re-run prune and rebuild steps.
  6. Verify resolution
  • Start containers: docker run --rm myorg/app:latest <cmd>
  • Check logs for layer errors: docker logs <container>
  • Confirm no dangling leftovers: docker images -f dangling=true
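
For step 1, a small loop can flag containers whose recorded image ID no longer resolves. This is a diagnostic sketch built only on docker inspect; there is no built-in subcommand for it:

#!/usr/bin/env bash
# Print every container whose image ID can no longer be inspected
docker ps -aq | while read -r cid; do
  img=$(docker inspect --format '{{.Image}}' "$cid")
  if ! docker image inspect "$img" >/dev/null 2>&1; then
    echo "container $cid references missing image $img"
  fi
done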

Why this happens

  • Re-tagging images without keeping old tags produces untagged (<none>) images (demonstrated in the sketch after this list).
  • docker image prune removes unreferenced layers, and force removal (docker rmi -f) or inconsistent daemon state can leave containers pointing at layers that no longer exist; such containers break on restart.
  • Interrupted pulls/builds or storage-driver issues can leave partial layer metadata.
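
The first cause is easy to reproduce: rebuilding a changed Dockerfile under the same tag untags the previous build (app is a placeholder name):

docker build -t app:latest .     # first build
# ...change the Dockerfile or sources...
docker build -t app:latest .     # the tag moves; the first image becomes <none>:<none>
docker images -f dangling=true   # the old build now shows as dangling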

Prevent it next time

  • Use immutable tags (e.g., app:1.2.3 or app:commit-sha) instead of only :latest (see the tagging sketch after this list).
  • Recreate containers when updating images instead of reusing old ones:
    • docker compose up -d --force-recreate --pull always
  • Automate pruning with filters to avoid nuking recent artifacts:
    • docker image prune -f --filter until=168h
    • docker builder prune -f --filter until=168h
  • Add a .dockerignore to reduce layer churn and cache invalidation.
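
One way to get immutable tags is to derive them from the commit SHA at build time; the registry path below is a placeholder:

# Tag each build with the short commit SHA so older images keep a valid tag
sha=$(git rev-parse --short HEAD)
docker build -t registry.example.com/app:"$sha" .
docker push registry.example.com/app:"$sha"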

Pitfalls to avoid

  • docker system prune -a --volumes deletes unused images and volumes. You may lose data if volumes hold state. Keep data in named volumes and back them up (a backup one-liner follows this list).
  • For Compose stacks, removing containers manually can drift from the declared state. Prefer docker compose down/up or redeploy.
  • Forcing docker rmi -f on an image that running containers use only untags it, leaving the image dangling; recreating those containers by tag then forces a re-pull and can cause downtime.
  • On CI agents, frequent rebuilds without immutable tags will balloon dangling images. Tag with commit SHAs and prune by age.
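
Before any prune that can touch volumes, archiving a named volume to a tarball is cheap insurance; myvol is a placeholder volume name:

# Back up the contents of a named volume into ./myvol-backup.tgz
docker run --rm -v myvol:/data -v "$PWD":/backup alpine \
  tar czf /backup/myvol-backup.tgz -C /data .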

Performance notes

  • Pruning scans metadata; on hosts with many images, use filters to bound work (e.g., --filter until=24h or --filter label!=keep).
  • Build cache speeds rebuilds. Prefer docker builder prune over docker system prune when only cache is the issue.
  • Reduce unnecessary layers to shrink storage pressure and prune time.
  • If you rely on warm image caches (registries slow or offline), avoid aggressive prune -a in production nodes; schedule age-based pruning during low traffic.
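
Age-based pruning is easy to schedule from cron; the 168h window and Sunday 03:00 slot below are illustrative, not prescriptive:

# /etc/cron.d/docker-prune: prune images and build cache older than 7 days,
# Sundays at 03:00, leaving tagged images and volumes untouched
0 3 * * 0 root docker image prune -f --filter until=168h && docker builder prune -f --filter until=168h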

Tiny FAQ

  • Q: Will docker image prune delete images I still use?
    • A: No. It removes only untagged images not referenced by any container.
  • Q: I still see the error after prune. What next?
    • A: Restart the daemon, remove stale containers, prune builder cache, then rebuild/pull.
  • Q: Can I fix this without deleting containers?
    • A: If a container references missing layers, you must recreate it from a valid image.
  • Q: Does this indicate disk or driver corruption?
    • A: Sometimes. Check docker info (storage driver), dmesg/syslog, and consider a daemon restart before deeper investigation.
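
For that last check, the storage driver and recent kernel messages are quick to inspect on Linux; the grep pattern is only a starting point:

# Show the storage driver in use, then scan kernel messages for overlay errors
docker info --format 'storage driver: {{.Driver}}'
sudo dmesg | grep -iE 'overlay|error' | tail -n 20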

Series: Docker
