You’ve been running containers for a year. Everything’s fine. Then one day your disk is 95% full and you’re panicking at 2 AM trying to figure out what happened.
Here’s the thing: Docker’s default logging driver writes JSON logs directly to disk, and it doesn’t rotate them. Not by default. Not ever, unless you tell it to. That container that spams debug logs? That’s a multi-gigabyte file sitting on your filesystem right now, silently growing.
The Problem
When you run docker logs mycontainer, you’re reading from /var/lib/docker/containers/<container-id>/<container-id>-json.log. That file has no size limit. No rotation. Just infinite growth until your disk screams.
Run this on any Docker host that’s been running for a while:
```
$ du -sh /var/lib/docker/containers/*/
18G   /var/lib/docker/containers/abc123def456/
2.1G  /var/lib/docker/containers/xyz789abc123/
...
```

Yeah. That’s your problem.
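If you want a single total instead of a listing, the same idea works with find and awk. A runnable sketch against a throwaway directory, so it is safe to try anywhere (on a real host, point LOG_ROOT at /var/lib/docker/containers, where you’ll likely need root):

```shell
# Build a fake log tree standing in for /var/lib/docker/containers.
LOG_ROOT="$(mktemp -d)"
mkdir -p "$LOG_ROOT/a" "$LOG_ROOT/b"
head -c 1500 /dev/zero > "$LOG_ROOT/a/a-json.log"
head -c 500  /dev/zero > "$LOG_ROOT/b/b-json.log"

# Sum the sizes of every container JSON log under LOG_ROOT.
TOTAL=$(find "$LOG_ROOT" -name '*-json.log' -printf '%s\n' | awk '{s += $1} END {print s}')
echo "$TOTAL bytes of container logs"   # 2000 bytes for this fake tree
```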
The Solution: daemon.json
The nuclear option is to set log rotation globally in Docker’s daemon config. This applies to all new containers.
Edit /etc/docker/daemon.json:
```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}
```

What this does:
- max-size: 100m — Each log file stops at 100MB, then rotates
- max-file: 3 — Keep 3 rotated files (100MB × 3 = 300MB max per container)
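The arithmetic generalizes to a whole host. A quick sketch of the worst-case footprint under these settings (the container count of 40 is a made-up example):

```shell
# Worst-case log footprint: max-size x max-file per container,
# times the number of containers (40 here is hypothetical).
max_size_mb=100
max_file=3
containers=40

per_container=$((max_size_mb * max_file))
total=$((per_container * containers))
echo "${per_container} MB per container, ${total} MB host-wide worst case"
# 300 MB per container, 12000 MB host-wide worst case
```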
After editing, restart Docker:
```
sudo systemctl restart docker
```

This applies to containers created after the restart. Existing containers keep their old unbounded logs; restarting them isn’t enough, because a container’s log config is fixed when it’s created. You’ll need to recreate them (docker rm and docker run again, or docker compose up -d --force-recreate) or truncate the old files manually.
Per-Container Override
You can override the global setting per container. In docker run:
```
docker run -d \
  --name chatty-app \
  --log-driver json-file \
  --log-opt max-size=50m \
  --log-opt max-file=2 \
  myimage:latest
```

Or in docker-compose:
```yaml
services:
  web:
    image: nginx:latest
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "2"
```

Check Current Disk Usage
See how much space your logs are actually using:
```
$ find /var/lib/docker/containers -name '*.log' -exec du -h {} + | sort -hr | head -20
18G   /var/lib/docker/containers/abc123/abc123-json.log
2.1G  /var/lib/docker/containers/xyz789/xyz789-json.log
...
```

Or a cleaner summary:
```
du -sh /var/lib/docker/containers/*/ | sort -hr
```

Clean Up Old Logs
Once you’ve set rotation, you can safely clean up the bloated files. Docker gives you a built-in command:

```
docker system prune
```

This removes stopped containers, dangling images, unused networks, and build cache; the log files of any removed container go with it. Be careful: any stopped container you still want will be deleted too. To remove only stopped containers and leave images and networks alone, use:

```
docker container prune
```
```
truncate -s 0 /var/lib/docker/containers/<container-id>/<container-id>-json.log
```

This clears the file without restarting the container.
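The one-liner above scales to a bulk sweep with find. A sketch against a throwaway directory so it’s safe to run as-is (on a real host, swap LOG_ROOT for /var/lib/docker/containers and raise the threshold to something like -size +1G):

```shell
# Fake layout standing in for /var/lib/docker/containers.
LOG_ROOT="$(mktemp -d)"
mkdir -p "$LOG_ROOT/abc123"
head -c 4096 /dev/zero > "$LOG_ROOT/abc123/abc123-json.log"

# Truncate every *-json.log larger than 1 KB under LOG_ROOT.
find "$LOG_ROOT" -name '*-json.log' -size +1k -exec truncate -s 0 {} \;

wc -c < "$LOG_ROOT/abc123/abc123-json.log"   # 0: the log was emptied in place
```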
Log Rotation is Happening But…
If you set max-size: 100m and the file is still growing past 100MB, Docker is rotating — but you might not notice because:
- The rotated files are still on disk (they’re just suffixed .1, .2, etc.)
- A single burst of logs can exceed max-size before rotation kicks in
- You’re looking at total disk usage across many containers
Check for backup logs:
```
ls -lah /var/lib/docker/containers/*/
```

You’ll see *-json.log, *-json.log.1, *-json.log.2, etc.
Alternative: journald Driver
If you’re on a systemd-based host, swap the log driver entirely to journald. Logs go into the journal, which handles rotation automatically via journald.conf:
```json
{
  "log-driver": "journald"
}
```

Read them like any other systemd log:
```
journalctl CONTAINER_NAME=mycontainer -f
```

A nice bonus: unlike remote drivers such as awslogs or splunk, journald still supports docker logs, so you keep both interfaces. The trade-off is that your logs now live in the journal’s binary format, subject to journald’s own retention settings. Decide what you can live with.
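Rotation for the journal itself is configured in /etc/systemd/journald.conf rather than daemon.json. A minimal fragment (the cap values here are example choices, not systemd defaults):

```ini
# /etc/systemd/journald.conf
[Journal]
# Total disk budget for the persistent journal
SystemMaxUse=1G
# Rotate individual journal files at this size
SystemMaxFileSize=100M
```

Apply the change with sudo systemctl restart systemd-journald.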
Set Sane Defaults Before You Need Them
The right time to configure log rotation is before your disk fills up, not after. If you’re setting up a new Docker host, do this on day one:
```
# Check if daemon.json already exists
cat /etc/docker/daemon.json 2>/dev/null || echo "File not found — safe to create"
```
```
# Add rotation config
sudo tee /etc/docker/daemon.json > /dev/null << 'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "5"
  }
}
EOF
```
```
sudo systemctl restart docker
```

50MB × 5 files = 250MB maximum per container. Reasonable for most workloads. Adjust max-size up for verbose apps, down for quiet ones.
Audit Your Current Setup
Before changing anything, know what you have:
```
# Check the log driver for a running container
docker inspect --format='{{.HostConfig.LogConfig.Type}}' mycontainer

# Check log options
docker inspect --format='{{.HostConfig.LogConfig.Config}}' mycontainer
```

If it returns json-file with no options, you’re running unbounded logging.
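To audit every container at once, you can loop the same inspect call over docker ps -aq. A sketch with a hypothetical classify helper; the sample strings mimic the format of that inspect output, so the logic can be exercised without a Docker daemon:

```shell
# classify flags a container whose json-file logging has no options set.
# Its input mimics the output of:
#   docker inspect --format '{{.HostConfig.LogConfig.Type}} {{.HostConfig.LogConfig.Config}}'
classify() {
  case "$1" in
    "json-file map[]") echo "UNBOUNDED" ;;
    *)                 echo "bounded"   ;;
  esac
}

# On a real host you would feed it live data:
#   docker ps -aq | while read -r id; do
#     cfg=$(docker inspect --format '{{.HostConfig.LogConfig.Type}} {{.HostConfig.LogConfig.Config}}' "$id")
#     echo "$id: $(classify "$cfg")"
#   done

classify "json-file map[]"                          # UNBOUNDED
classify "json-file map[max-file:3 max-size:100m]"  # bounded
```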
The Real Solution: Ship Logs Elsewhere
Honestly, if you’re serious about logging, get them off the local filesystem. Have your apps write to stdout and stderr (that’s what container orchestration expects), then use a logging driver that ships those streams to Splunk, ELK, Datadog, CloudWatch, or Grafana Loki.
For Loki (free and self-hosted; note the Loki log driver is a Docker plugin you install separately):
```yaml
services:
  app:
    image: myapp:latest
    logging:
      driver: loki
      options:
        loki-url: "http://loki:3100/loki/api/v1/push"
```

But that’s a bigger conversation. For now, rotate your logs, check your disk, and sleep better at night.
Your 2 AM self will thank you.