
Why Your Docker Logs Are Eating Your Disk

By SumGuy 5 min read

You’ve been running containers for a year. Everything’s fine. Then one day your disk is 95% full and you’re panicking at 2 AM trying to figure out what happened.

Here’s the thing: Docker’s default logging driver writes JSON logs directly to disk, and it doesn’t rotate them. Not by default. Not ever, unless you tell it to. That container that spams debug logs? That’s a multi-gigabyte file sitting on your filesystem right now, silently growing.

The Problem

When you run docker logs mycontainer, you’re reading from /var/lib/docker/containers/<container-id>/<container-id>-json.log. That file has no size limit. No rotation. Just infinite growth until your disk screams.

Run this on any Docker host that’s been running for a while:

Terminal window
$ du -sh /var/lib/docker/containers/*/
18G /var/lib/docker/containers/abc123def456/
2.1G /var/lib/docker/containers/xyz789abc123/
...

Yeah. That’s your problem.

The Solution: daemon.json

The nuclear option is to set log rotation globally in Docker’s daemon config. This applies to all new containers.

Edit /etc/docker/daemon.json:

/etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}

What this does:

  1. max-size: "100m" — rotate the log once it reaches 100 MB
  2. max-file: "3" — keep at most 3 files (the active log plus two rotated copies)

Worst case, that's roughly 300 MB of log data per container.

After editing, restart Docker:

Terminal window
sudo systemctl restart docker

This applies to containers created after the restart. Existing containers keep their old unbounded logs. You’ll need to either restart them or prune manually.

Per-Container Override

You can override the global setting per container. In docker run:

Terminal window
docker run -d \
--name chatty-app \
--log-driver json-file \
--log-opt max-size=50m \
--log-opt max-file=2 \
myimage:latest

Or in docker-compose:

docker-compose.yml
services:
  web:
    image: nginx:latest
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "2"

Check Current Disk Usage

See how much space your logs are actually using:

Terminal window
$ find /var/lib/docker/containers -name '*.log' -exec du -h {} + | sort -hr | head -20
18G /var/lib/docker/containers/abc123/abc123-json.log
2.1G /var/lib/docker/containers/xyz789/xyz789-json.log
...

Or a cleaner summary:

Terminal window
du -sh /var/lib/docker/containers/*/ | sort -hr

Clean Up Old Logs

Once you’ve set rotation, you can safely clean up the bloated files. Docker gives you a built-in command:

Terminal window
docker system prune

This removes stopped containers, dangling images, unused networks, and build cache — and each removed container's log file goes with it. (Volumes are untouched unless you add --volumes.) Be careful — it deletes every stopped container. If you only want to remove stopped containers and their logs, without touching images or networks, use:

Terminal window
docker container prune

For surgical removal of logs from a specific container (without deleting the container):

Terminal window
sudo truncate -s 0 /var/lib/docker/containers/<container-id>/<container-id>-json.log

This clears the file without restarting the container.
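If several containers have bloated logs, the same truncate trick works in bulk. A sketch — the 1 GiB threshold and the helper name truncate_big_logs are illustrative, and scanning the real /var/lib/docker/containers requires root:

```shell
# Truncate every container log above a size threshold, in place.
# Takes the directory to scan; defaults to Docker's container dir.
truncate_big_logs() {
  root=${1:-/var/lib/docker/containers}
  # -size +1G matches files strictly larger than 1 GiB (GNU find)
  find "$root" -name '*-json.log' -size +1G -exec truncate -s 0 {} \;
}
```

Like the single-file version, this empties the files without restarting any containers.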

Log Rotation is Happening But…

If you set max-size: 100m and log disk usage is still climbing past 100MB, Docker probably is rotating — you might not notice because:

  1. The rotated files are still on disk (they’re just .1, .2, etc.)
  2. A single burst of logs can exceed max-size before rotation kicks in
  3. You’re looking at total disk usage across many containers

Check for backup logs:

Terminal window
ls -lah /var/lib/docker/containers/*/

You’ll see *-json.log, *-json.log.1, *-json.log.2, etc.
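To see the total across active and rotated files together, you can sum everything matching the wider pattern (a sketch; log_usage_total is a made-up helper name):

```shell
# Sum disk usage of all container logs, rotated copies included.
# The '*-json.log*' pattern catches the live log and its .1/.2 backups.
log_usage_total() {
  root=${1:-/var/lib/docker/containers}
  find "$root" -name '*-json.log*' -exec du -k {} + \
    | awk '{ kib += $1 } END { printf "%d KiB total in container logs\n", kib }'
}
```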

Alternative: journald Driver

If you’re on a systemd-based host, swap the log driver entirely to journald. Logs go into the journal, which handles rotation automatically via journald.conf:

/etc/docker/daemon.json
{
  "log-driver": "journald"
}
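Rotation then becomes journald's job. Retention lives in /etc/systemd/journald.conf (or a drop-in under journald.conf.d); the values below are examples, not recommendations:

/etc/systemd/journald.conf
[Journal]
SystemMaxUse=1G
MaxRetentionSec=1month

Apply with sudo systemctl restart systemd-journald.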

Read them like any other systemd log:

Terminal window
journalctl CONTAINER_NAME=mycontainer -f

The trade-off: retention is now governed by journald's configuration instead of Docker's log-opts. (docker logs itself still works with the journald driver, unlike some remote drivers.) Decide which set of knobs you'd rather manage.

Set Sane Defaults Before You Need Them

The right time to configure log rotation is before your disk fills up, not after. If you’re setting up a new Docker host, do this on day one:

Terminal window
# Check if daemon.json already exists
cat /etc/docker/daemon.json 2>/dev/null || echo "File not found — safe to create"
# Add rotation config
sudo tee /etc/docker/daemon.json > /dev/null << 'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "5"
  }
}
EOF
sudo systemctl restart docker

50MB × 5 files = 250MB maximum per container. Reasonable for most workloads. Adjust max-size up for verbose apps, down for quiet ones.
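To confirm the new defaults actually took effect, launch a throwaway container and inspect what it inherited. A sketch — verify_log_defaults is a made-up helper and assumes the alpine image is available locally:

```shell
# Start a short-lived container and print the log options it inherited
# from daemon.json, then clean it up.
verify_log_defaults() {
  docker run -d --rm --name logcheck alpine sleep 30 >/dev/null
  docker inspect --format '{{.HostConfig.LogConfig.Config}}' logcheck
  docker stop logcheck >/dev/null
}
```

A healthy result mentions max-size and max-file; an empty map[] means the daemon config didn't apply.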

Audit Your Current Setup

Before changing anything, know what you have:

Terminal window
# Check the log driver for a running container
docker inspect --format='{{.HostConfig.LogConfig.Type}}' mycontainer
# Check log options
docker inspect --format='{{.HostConfig.LogConfig.Config}}' mycontainer

If it returns json-file with no options, you’re running unbounded logging.
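To audit every running container at once instead of one by one, loop over docker ps (a sketch; list_unbounded is a made-up name):

```shell
# Print the names of running containers that use the json-file driver
# with no max-size set — i.e. the ones logging without bound.
list_unbounded() {
  docker ps --format '{{.Names}}' | while read -r name; do
    driver=$(docker inspect --format '{{.HostConfig.LogConfig.Type}}' "$name")
    opts=$(docker inspect --format '{{.HostConfig.LogConfig.Config}}' "$name")
    if [ "$driver" = "json-file" ]; then
      case "$opts" in
        *max-size*) ;;       # rotation configured — fine
        *) echo "$name" ;;   # no max-size: unbounded
      esac
    fi
  done
}
```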

The Real Solution: Ship Logs Elsewhere

Honestly, if you’re serious about logging, get logs off the local filesystem. Have your apps write to stdout/stderr (that’s what container orchestration expects), then use a logging driver or collection agent to ship them to Splunk, ELK, Datadog, CloudWatch, or Grafana Loki.

For Loki (free, self-hosted — note that the Grafana Loki Docker logging plugin must be installed first):

docker-compose.yml
services:
  app:
    image: myapp:latest
    logging:
      driver: loki
      options:
        loki-url: "http://loki:3100/loki/api/v1/push"

But that’s a bigger conversation. For now, rotate your logs, check your disk, and sleep better at night.

Your 2 AM self will thank you.

