Your Docker host is running out of disk space. You've got dangling images, stopped containers, unused volumes, and gigabytes of build cache nobody needs. You could delete it all with `docker system prune -a`, but that nuclear option tends to delete more than you intended.
Let’s be surgical about this.
Understand What’s Eating Space
First, check what’s using disk:
```
$ docker system df
TYPE            TOTAL   ACTIVE  SIZE     RECLAIMABLE
Images          42      8       18.5GB   14.2GB
Containers      89      3       2.1GB    2.0GB
Local Volumes   12      2       500MB    400MB
Build Cache     -       -       12.3GB   9.8GB
```
This tells you:
- 42 images: 18.5GB total, but only 8 are in use, 14.2GB can be freed
- 89 containers: Only 3 are running; the stopped ones account for 2.0GB you can reclaim
- 12 volumes: 400MB unused
- Build cache: 9.8GB that can be reclaimed
The build cache is usually the biggest culprit.
The Safe Cleanup Path
Step 1: Stop and Remove Unused Containers
Stopped containers take up space. Remove them:
```
# See stopped containers
$ docker ps -a --filter "status=exited"

# Remove all stopped containers
$ docker container prune -f
Deleted Containers:
abc123def456
def456ghi789
...

Total reclaimed space: 2.0GB
```
This is safe. Stopped containers aren't running anything. (Note that `docker container prune` only removes containers; unused networks are a separate concern handled by `docker network prune` or `docker system prune`.)
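If you like to keep recently stopped containers around for debugging, `container prune` accepts an `until` filter so it only touches older ones. A small sketch (the 24-hour window is just an example):

```shell
# Remove only containers that have been stopped for more than 24 hours
docker container prune --filter "until=24h" -f
```

The same `until` filter works with `docker image prune` and `docker system prune`.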
Step 2: Remove Dangling Images
Dangling images are layers that have no tag and aren’t used by any container. They’re orphaned:
```
# See dangling images
$ docker images --filter "dangling=true"
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
<none>       <none>   abc123def456   3 weeks ago   850MB
<none>       <none>   def456ghi789   2 weeks ago   1.2GB

# Remove them
$ docker image prune -f
Deleted Images:
deleted: sha256:abc123def456
deleted: sha256:def456ghi789

Total reclaimed space: 2.1GB
```
Safe. These images aren't tagged and nothing uses them.
Step 3: Clean Up Build Cache Selectively
Build cache accumulates fast, especially if you change Dockerfiles frequently. You can clean it smartly:
```
# See detailed cache usage
$ docker builder du
ID              RECLAIMABLE   SIZE
abc123def456    true          500MB
def456ghi789    true          300MB
...

# Prune dangling build cache (keeps cache still in use)
$ docker builder prune -f
Total reclaimed space: 8.5GB
```
Or be more aggressive:
```
# Remove all cache (nuke it)
$ docker builder prune -a -f
Total reclaimed space: 12.3GB   # but the next build starts from scratch
```
Step 4: Remove Unused Volumes
Volumes persist data. Be careful here—deleting a volume deletes data:
```
# See unused volumes (not mounted by any container)
$ docker volume ls --filter "dangling=true"
DRIVER    VOLUME NAME
local     olddb_data_1
local     test_volume_2

# Remove them
$ docker volume prune -f
Deleted Volumes:
olddb_data_1
test_volume_2

Total reclaimed space: 450MB
```
Only delete volumes you don't need. If in doubt, leave them. Note that on Docker 23.0 and later, `docker volume prune` only removes anonymous volumes by default; add `-a` to also remove unused named volumes like the ones above.
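If you're unsure whether a volume still matters, inspect it before pruning. A sketch using the `olddb_data_1` name from the listing above:

```shell
# Where does the volume store its data on the host?
docker volume inspect olddb_data_1 --format '{{ .Mountpoint }}'

# Peek at its contents via a throwaway container before deciding
docker run --rm -v olddb_data_1:/data busybox ls -la /data
```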
Step 5: Remove Unused Images (Carefully)
This is where you can accidentally delete images you wanted to keep. Be selective:
```
# See all images
$ docker images
REPOSITORY   TAG      IMAGE ID       SIZE
ubuntu       22.04    abc123def456   77MB
myapp        latest   def456ghi789   500MB
node         18       ghi789jkl012   900MB
old-python   3.8      jkl012mno345   800MB

# Remove a specific old image
$ docker rmi old-python:3.8
Untagged: old-python:3.8
Deleted: sha256:jkl012mno345
```
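One quick sanity check before removing anything: list the image every container references, running or stopped. Anything absent from this list is a candidate for removal (this won't catch images you pull manually as build bases, so treat it as a hint, not proof):

```shell
# Images referenced by any container (running or stopped)
docker ps -a --format '{{ .Image }}' | sort -u
```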
```
# DON'T do this (removes all unused images):
# $ docker image prune -a   # ← this deletes even images you might still want
```
The Nuclear Option: docker system prune
docker system prune is convenient but aggressive:
```
# Remove stopped containers, dangling images, unused networks
$ docker system prune -f
Deleted Containers:
abc123def456
...

Deleted Images:
def456ghi789
...

Deleted Networks:
old_network

Total reclaimed space: 3.2GB
```
Add `-a` and it gets nuclear:
```
# DANGER: remove everything not in use
$ docker system prune -a -f
# Deletes: stopped containers, ALL unused images, unused networks
# (volumes are only touched if you also pass --volumes)
# You may delete images you wanted to keep
```
Use `--filter` to be more selective:
```
# Only prune things older than 72 hours
$ docker system prune --filter "until=72h" -f
```
Prevention: Limit What Docker Stores
If you want to avoid the cleanup dance, set limits in /etc/docker/daemon.json:
```json
{
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "builder": {
    "gc": {
      "enabled": true,
      "policy": [
        { "all": false, "keepStorage": "10GB", "filter": ["unused-for=336h"] }
      ]
    }
  }
}
```
(The build cache garbage collector is configured under the `builder.gc` key; `336h` is 14 days.) Restart Docker:
```
$ sudo systemctl restart docker
```
Now:
- Container logs are limited to 30MB total (3 files × 10MB)
- Build cache older than 14 days is auto-cleaned
- You avoid manual cleanup sprints
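One caveat: `daemon.json` log options only apply to containers created after the restart; existing containers keep their old settings. You can verify what a container actually uses (the container name `web` here is just a placeholder):

```shell
# Show the effective log driver and options for a container
docker inspect --format '{{ .HostConfig.LogConfig }}' web
```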
Safe Cleanup Script
Here’s a script that cleans up safely without deleting important stuff:
```bash
#!/bin/bash
set -e

echo "=== Docker Cleanup ==="

echo "1. Removing stopped containers..."
docker container prune -f

echo "2. Removing dangling images..."
docker image prune -f

echo "3. Pruning build cache (keeps cache in use)..."
docker builder prune -f

echo "4. Removing unused volumes..."
docker volume prune -f

echo "5. Current disk usage:"
docker system df

echo "=== Done ==="
```
Run it:
```
$ chmod +x cleanup.sh
$ ./cleanup.sh
```
Checklist Before Big Cleanup
- Backup important volumes first:
  `docker run --rm -v data:/data -v /backup:/backup busybox tar czf /backup/data.tar.gz /data`
- Check what will be deleted: run `docker system df` first
- Don't use `-a` unless you're sure
- Test in staging or dev first
- Have a rollback plan if something breaks
- Consider setting log limits in daemon.json to prevent future bloat
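Since the backup command above archives the volume under its absolute `/data` path, a matching restore (into a volume named `data`, as in that example) would look roughly like this:

```shell
# Restore the archive back into the volume
docker run --rm -v data:/data -v /backup:/backup busybox tar xzf /backup/data.tar.gz -C /
```

The `-C /` extracts relative to the root, putting the files back under `/data` inside the container, which is the mounted volume.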
What to Do Every Week
```
# Weekly cleanup (safe)
$ docker container prune -f
$ docker image prune -f
$ docker volume prune -f
$ docker system df
```
This takes 30 seconds and prevents disk bloat. Better than waiting until you're at 100% capacity.
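To make the weekly run automatic, one option is a cron entry that calls the cleanup script from earlier (the script path and log location here are assumptions; adjust them to your setup):

```shell
# /etc/cron.d/docker-cleanup: run the safe cleanup every Sunday at 03:00
0 3 * * 0  root  /usr/local/bin/cleanup.sh >> /var/log/docker-cleanup.log 2>&1
```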
Docker cleanup isn’t scary if you understand what each command does. Remove stopped containers and dangling images without hesitation. Be careful with volumes. Don’t nuke build cache unless you’re really constrained. And set log limits to avoid surprises.