SumGuy's Ramblings

Docker Logging: From "Where Did My Logs Go?" to Centralized Bliss

So you spun up a few Docker containers. Everything’s working. Life is good. Then something breaks at 2 AM and you think, “I’ll just check the logs.” Thirty minutes later, you’re questioning every life decision that led you to this moment, because your logs are either gone, incomprehensible, or scattered across seventeen different places.

Welcome to the wonderful world of Docker logging.

Don’t worry — by the end of this article, you’ll go from “where did my logs go?” to having a centralized, searchable, rotated, and (dare I say) pleasant logging setup. Let’s get into it.

The Basics: docker logs and How Containers Actually Log

Before we get fancy, let’s understand the fundamentals. Docker containers, by default, capture anything your application writes to stdout and stderr. That’s it. There’s no magic log file inside the container (well, unless your app creates one, but we’ll get to that antipattern later).

To see these logs, you use the aptly named command:

docker logs <container_name_or_id>

Some handy flags you’ll use constantly:

# Follow logs in real-time (like tail -f)
docker logs -f my-container

# Show last 100 lines
docker logs --tail 100 my-container

# Show logs since a specific time
docker logs --since 2024-01-15T10:00:00 my-container

# Show timestamps
docker logs -t my-container

# Combine them -- last 50 lines, with timestamps, following
docker logs -f -t --tail 50 my-container

This works perfectly when you have one or two containers running on a single machine. But the moment you scale beyond that — or the moment your container restarts and you realize the old logs might be gone — things get spicy.

Pro tip: If your application logs to a file inside the container instead of stdout/stderr, docker logs will show you absolutely nothing. This is the number one “where did my logs go?” moment for Docker beginners. The fix? Configure your app to log to stdout. In most frameworks, this is a one-line config change. Do it. Future-you will thank present-you.
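If you can't change the app's config, there's a well-known workaround, used by the official nginx image: symlink the app's log files to the container's stdout/stderr streams. A Dockerfile sketch (the /var/log/myapp paths are placeholders for wherever your app actually writes):

```dockerfile
# Redirect the app's file-based logs to the container's output streams
# so `docker logs` picks them up. (/var/log/myapp is a placeholder path.)
RUN mkdir -p /var/log/myapp \
 && ln -sf /dev/stdout /var/log/myapp/access.log \
 && ln -sf /dev/stderr /var/log/myapp/error.log
```

Now everything the app writes to those "files" flows straight into Docker's logging pipeline.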

Logging Drivers: The Engine Behind the Curtain

Here’s where Docker gets interesting. Behind every docker logs command is a logging driver — the mechanism Docker uses to handle your container’s output. Think of it like choosing where your mail gets delivered. Same letters, different mailbox.

Docker supports several logging drivers, and choosing the right one matters more than you think.

json-file (The Default)

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

This is what Docker uses out of the box. It writes logs as JSON to files on disk, typically found at /var/lib/docker/containers/<container-id>/<container-id>-json.log.

Pros: Simple, works with docker logs, human-readable. Cons: No built-in centralization, can eat your disk alive if you don’t configure rotation (more on that in a minute).
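You can poke at those raw entries yourself. The exact path comes from docker inspect, and each line of the file is one JSON object with log, stream, and time fields. A sketch (my-container is a placeholder, and the sample line below is made up):

```shell
# Find the on-disk log file for a container:
#   docker inspect --format '{{.LogPath}}' my-container
# (reading it usually needs sudo)

# Each line in that file is a JSON object like this sample; a quick sed
# pulls out just the message text:
sample='{"log":"GET / 200","stream":"stdout","time":"2024-01-15T10:00:00Z"}'
echo "$sample" | sed 's/.*"log":"\([^"]*\)".*/\1/'   # -> GET / 200
```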

syslog

{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://logs.example.com:514",
    "syslog-facility": "daemon",
    "tag": "{{.Name}}"
  }
}

Sends logs to a syslog server. If your organization already has a syslog infrastructure, this is a natural fit. It speaks the language your ops team already knows.

Pros: Integrates with existing syslog infrastructure, well-understood protocol. Cons: docker logs command stops working (yep, really), UDP can lose messages, limited structured data support.
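If UDP's fire-and-forget behavior worries you, the same option accepts TCP transports — a sketch with a placeholder host:

```json
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "tcp://logs.example.com:514"
  }
}
```

Docker also supports tcp+tls:// here if your syslog endpoint requires encryption.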

fluentd

{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "localhost:24224",
    "tag": "docker.{{.Name}}"
  }
}

Routes logs to a Fluentd or Fluent Bit collector. This is where things start getting professional. Fluentd can parse, filter, transform, and ship your logs basically anywhere.

Pros: Extremely flexible, supports hundreds of output plugins, great for centralized setups. Cons: Requires running a Fluentd/Fluent Bit instance, slightly more complex setup.

Other Notable Drivers

Docker also ships drivers for journald, gelf (Graylog), awslogs (AWS CloudWatch), gcplogs (Google Cloud Logging), and splunk, plus the local driver covered in the rotation section below. If your logs already have a home, there's probably a driver that speaks its protocol.

Important gotcha: when you switch away from json-file or local, the docker logs command stops working for most drivers (Docker 20.10+ mitigates this with a "dual logging" cache, but don't count on it everywhere). This catches people off guard constantly. Plan accordingly.

Log Rotation: Stop Your Disk From Committing Seppuku

Here’s a horror story I’ve seen play out more times than I’d like to admit: a production server runs out of disk space. Everything grinds to a halt. Databases crash. Alerts fire. Someone SSHs in and discovers a single Docker container has been writing logs for six months straight, and there’s now a 47GB JSON file sitting in /var/lib/docker/containers/.

Don’t be that person.

Configuring Log Rotation

For the default json-file driver, add rotation settings. You can do this per-container or globally.

Per-container (in docker run):

docker run \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=5 \
  my-app:latest

Globally (in /etc/docker/daemon.json):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5",
    "compress": "true"
  }
}

After editing daemon.json, restart Docker:

sudo systemctl restart docker

This gives you 5 rotated log files, each max 10MB, compressed. That’s a maximum of ~50MB per container instead of infinity-and-beyond.

What About the local Driver?

The local driver handles rotation by default and is more efficient with disk space:

{
  "log-driver": "local",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

It uses a compressed internal format, so your 10MB files actually hold more log data than the equivalent json-file configuration. If you’re running a single-host setup and just want logs to behave themselves, local is a solid pick.

Sizing Guidelines

Here’s a rough cheat sheet for rotation settings:

Environment                    | max-size | max-file | Notes
-------------------------------|----------|----------|----------------------------------------
Dev/local                      | 5m       | 2        | Minimal, just enough to debug
Staging                        | 10m      | 3        | Moderate, mirrors prod-ish behavior
Production (centralized)       | 10m      | 5        | Buffer while logs ship to central store
Production (no centralization) | 50m      | 10       | You'll want more local history
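Whatever row you pick, the worst-case disk cost is easy to bound: max-size times max-file, times the number of containers on the host. A quick sanity check (the container count here is just an example):

```shell
# Worst-case log disk usage = max-size x max-file x containers
max_size_mb=10
max_file=5
containers=20
echo "per container: $((max_size_mb * max_file)) MB"            # 50 MB
echo "host worst case: $((max_size_mb * max_file * containers)) MB"  # 1000 MB
```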

Docker Compose Logging Configuration

If you’re using Docker Compose (and you probably should be for anything beyond a single container), logging configuration lives right in your docker-compose.yml:

version: "3.8"

services:
  web:
    image: nginx:latest
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "5"
        tag: "{{.Name}}"

  api:
    image: my-api:latest
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "5"
        tag: "{{.Name}}"

  worker:
    image: my-worker:latest
    logging:
      driver: fluentd
      options:
        fluentd-address: "localhost:24224"
        tag: "app.worker"

Notice you can mix and match drivers per service. Maybe your web proxy uses json-file because you want quick docker logs access, while your worker ships to Fluentd for centralized processing. Totally valid.

YAML Anchors for DRY Logging Config

If you’re applying the same logging config to multiple services (you usually are), use YAML anchors to keep things clean:

x-logging: &default-logging
  driver: json-file
  options:
    max-size: "10m"
    max-file: "5"
    tag: "{{.Name}}"

services:
  web:
    image: nginx:latest
    logging: *default-logging

  api:
    image: my-api:latest
    logging: *default-logging

  worker:
    image: my-worker:latest
    logging: *default-logging

Chef’s kiss. Clean, maintainable, and you only have to change it in one place.

Centralized Logging with Loki + Grafana

Alright, let’s graduate from “I can see my logs on one machine” to “I can see ALL my logs from EVERYWHERE in one beautiful dashboard.” Enter the Loki + Grafana stack.

Why Loki?

If you’ve heard of the ELK stack (Elasticsearch, Logstash, Kibana), Loki is like its lean, mean, resource-efficient cousin. Created by Grafana Labs, Loki was specifically designed for log aggregation and pairs perfectly with Grafana for visualization.

The key difference: Elasticsearch indexes the full text of every log line, while Loki indexes only a small set of labels (container, service, host) and stores the raw lines as compressed chunks. That makes Loki dramatically cheaper to run, at the cost of slower brute-force text searches.

Think of it this way: Elasticsearch is a library that catalogs every word in every book. Loki is a library that catalogs book titles and authors, then lets you read the full text when you need it. Way less overhead for most use cases.

Setting Up the Stack

Here’s a complete Docker Compose setup for Loki + Grafana + Promtail (Loki’s log shipping agent):

version: "3.8"

services:
  loki:
    image: grafana/loki:2.9.0
    ports:
      - "3100:3100"
    volumes:
      - ./loki-config.yml:/etc/loki/local-config.yaml
      - loki-data:/loki
    command: -config.file=/etc/loki/local-config.yaml

  promtail:
    image: grafana/promtail:2.9.0
    volumes:
      - ./promtail-config.yml:/etc/promtail/config.yml
      - /var/log:/var/log
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock
    command: -config.file=/etc/promtail/config.yml
    depends_on:
      - loki

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=changeme
    depends_on:
      - loki

volumes:
  loki-data:
  grafana-data:

Loki Configuration

Create loki-config.yml:

auth_enabled: false

server:
  http_listen_port: 3100

common:
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2020-10-24
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h  # 7 days
  max_query_length: 721h

storage_config:
  filesystem:
    directory: /loki/storage

compactor:
  working_directory: /loki/compactor

Promtail Configuration

Create promtail-config.yml:

server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s
    relabel_configs:
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: 'container'
      - source_labels: ['__meta_docker_container_log_stream']
        target_label: 'stream'
      - source_labels: ['__meta_docker_compose_service']
        target_label: 'service'

Using the Loki Docker Plugin (Alternative to Promtail)

If you’d rather skip Promtail entirely, you can install the Loki Docker logging driver plugin and ship logs directly from Docker:

docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions

Then configure it in daemon.json:

{
  "log-driver": "loki",
  "log-opts": {
    "loki-url": "http://localhost:3100/loki/api/v1/push",
    "loki-batch-size": "400",
    "loki-retries": "2",
    "loki-max-backoff": "800ms",
    "loki-timeout": "1s"
  }
}

Or per-service in Docker Compose:

services:
  my-app:
    image: my-app:latest
    logging:
      driver: loki
      options:
        loki-url: "http://localhost:3100/loki/api/v1/push"
        loki-batch-size: "400"
        loki-retries: "2"

Querying with LogQL

Once your logs are flowing into Loki, you can query them in Grafana using LogQL. Here are some examples to get you started:

# All logs from a specific container
{container="my-api"}

# Filter by log content
{container="my-api"} |= "error"

# Regex filter
{container="my-api"} |~ "status=[45]\\d{2}"

# Parse JSON logs and filter by field
{container="my-api"} | json | level="error"

# Count errors per minute
count_over_time({container="my-api"} |= "error" [1m])

# Top 5 containers by error count
topk(5, count_over_time({service=~".+"} |= "error" [5m]))

LogQL is genuinely powerful once you get the hang of it. It’s like grep learned kung fu and got a visualization degree.

Fluentd and Fluent Bit Setup

Fluentd (and its lighter sibling, Fluent Bit) are the Swiss Army knives of log processing. They collect, parse, filter, and ship logs to basically any destination you can think of.

Fluentd vs. Fluent Bit: Which One?

Feature          | Fluentd                               | Fluent Bit
-----------------|---------------------------------------|---------------------------------------------------
Language         | Ruby + C                              | C
Memory footprint | ~40MB                                 | ~450KB
Plugin ecosystem | Huge (900+)                           | Smaller but growing
Best for         | Complex processing, many destinations | Edge collection, resource-constrained environments
Configuration    | More flexible                         | Simpler

Rule of thumb: Use Fluent Bit on each host to collect and do basic filtering, then ship to Fluentd for complex processing. Or just use Fluent Bit for everything if your pipeline is straightforward.

Fluent Bit Docker Setup

version: "3.8"

services:
  fluent-bit:
    image: fluent/fluent-bit:latest
    volumes:
      - ./fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
    ports:
      - "24224:24224"
      - "24224:24224/udp"

  app:
    image: my-app:latest
    logging:
      driver: fluentd
      options:
        fluentd-address: "localhost:24224"
        tag: "app.{{.Name}}"

Fluent Bit Configuration

Create fluent-bit.conf:

[SERVICE]
    Flush        1
    Log_Level    info
    Parsers_File parsers.conf

[INPUT]
    Name         forward
    Listen       0.0.0.0
    Port         24224

[FILTER]
    Name         parser
    Match        app.*
    Key_Name     log
    Parser       json
    Reserve_Data On

[FILTER]
    Name         modify
    Match        *
    Add          hostname ${HOSTNAME}
    Add          environment production

[FILTER]
    Name         grep
    Match        *
    Exclude      log healthcheck

[OUTPUT]
    Name         loki
    Match        *
    Host         loki
    Port         3100
    Labels       job=fluent-bit, app=$TAG
    Auto_Kubernetes_Labels off

[OUTPUT]
    Name         stdout
    Match        *
    Format       json_lines

This config does several things:

  1. Accepts logs from Docker containers via the forward protocol
  2. Parses JSON log messages into structured fields
  3. Adds hostname and environment metadata
  4. Filters out healthcheck noise (because nobody needs 10,000 “GET /health 200” lines per hour)
  5. Ships to Loki for centralized storage
  6. Also prints to stdout for debugging

Fluentd Configuration (Full Setup)

If you need the full power of Fluentd:

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<filter app.**>
  @type parser
  key_name log
  reserve_data true
  <parse>
    @type json
  </parse>
</filter>

<filter **>
  @type record_transformer
  <record>
    hostname "#{Socket.gethostname}"
    environment "#{ENV['ENVIRONMENT'] || 'development'}"
  </record>
</filter>

# Remove noisy health check logs
<filter **>
  @type grep
  <exclude>
    key log
    pattern /healthcheck|health_check|GET \/health/
  </exclude>
</filter>

# Route errors to a separate output for alerting
<match app.**.error>
  @type copy
  <store>
    @type loki
    url "http://loki:3100"
    <label>
      job fluentd
      level error
    </label>
  </store>
  <store>
    @type slack
    webhook_url "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
    channel "#alerts"
    username "Log Alert"
    message "%s"
  </store>
</match>

<match **>
  @type loki
  url "http://loki:3100"
  <label>
    job fluentd
  </label>
  <buffer>
    @type file
    path /fluentd/buffer
    flush_interval 5s
    chunk_limit_size 2M
    retry_max_interval 30
    retry_forever true
  </buffer>
</match>

Log Filtering: Separating Signal from Noise

Once you have centralized logging, you’ll quickly realize that 90% of your logs are noise. Health checks, routine operations, debug messages in production — it all adds up. Here’s how to tame it.

At the Application Level

The best filtering happens before logs even leave your app. Use appropriate log levels:

FATAL  -> Something is catastrophically broken
ERROR  -> Something failed, needs attention
WARN   -> Something unexpected, but handled
INFO   -> Normal operations worth noting
DEBUG  -> Detailed diagnostic info (never in production)
TRACE  -> Ultra-detailed (definitely never in production)
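Most frameworks read the level from configuration or an environment variable, which makes Compose a convenient place to control it. A sketch (LOG_LEVEL is a common convention, not a standard — check what your framework actually reads):

```yaml
services:
  api:
    image: my-api:latest
    environment:
      - LOG_LEVEL=info   # bump to debug while diagnosing, then set it back
```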

At the Docker Level

Use the tag option to make logs identifiable:

logging:
  driver: json-file
  options:
    tag: "{{.ImageName}}/{{.Name}}/{{.ID}}"

At the Collector Level (Fluent Bit Example)

# Drop debug logs in production
[FILTER]
    Name    grep
    Match   *
    Exclude level debug

# Only keep errors from noisy services
[FILTER]
    Name    grep
    Match   app.payment-service
    Regex   level (error|fatal)

# Sample verbose logs (keep 1 in 10)
[FILTER]
    Name          throttle
    Match         app.verbose-service
    Rate          10
    Window        300
    Print_Status  true

In LogQL (Grafana)

# Filter out healthchecks and metrics endpoints
{service="api"} != "GET /health" != "GET /metrics"

# Only show errors with stack traces
{service="api"} |= "error" |= "Traceback"

# Parse JSON and filter by response time > 1s
{service="api"} | json | response_time > 1000

Storage Considerations

Logs are deceptively expensive. Not in the “oops I left a GPU instance running” way, but in the slow-drip, “why is our S3 bill $800 this month?” way. Here’s what to think about.

Retention Policies

Set aggressive retention policies. Ask yourself: “When was the last time I needed a log from more than 30 days ago?” For most teams, the answer is never.

In Loki, configure retention in your config:

limits_config:
  retention_period: 720h  # 30 days

compactor:
  working_directory: /loki/compactor
  retention_enabled: true
  retention_delete_delay: 2h
  delete_request_cancel_period: 24h

Storage Tiers

For larger deployments, consider tiered storage:

Tier    | Duration   | Storage                           | Cost
--------|------------|-----------------------------------|-------
Hot     | 0-7 days   | Local SSD / fast disk             | $$$
Warm    | 7-30 days  | Object storage (S3/GCS)           | $$
Cold    | 30-90 days | Cheap object storage (S3 Glacier) | $
Archive | 90+ days   | Only if compliance requires it    | Varies

Loki supports object storage backends natively, making this kind of tiering straightforward.

Estimating Storage Needs

A rough formula:

Daily log volume = (avg log line size) x (lines per second) x 86400

Example:
200 bytes x 100 lines/sec x 86400 = ~1.6 GB/day uncompressed
With Loki compression: ~0.3-0.5 GB/day
30-day retention: ~10-15 GB total

That’s very manageable. But scale that to 50 services each doing 1000 lines/sec and suddenly you’re looking at real storage costs. Plan accordingly.
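That back-of-the-envelope math is worth scripting so you can plug in your own numbers — here's the same example as quick shell arithmetic:

```shell
# Daily log volume = avg line size x lines/sec x seconds per day
bytes_per_line=200
lines_per_sec=100
daily_bytes=$((bytes_per_line * lines_per_sec * 86400))
echo "$((daily_bytes / 1024 / 1024)) MB/day uncompressed"   # prints 1647 (~1.6 GB)
```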

Compression Matters

Always enable compression. Loki compresses by default, but if you’re using json-file logging driver, add "compress": "true" to your log options. Log data is extremely compressible (often 10:1 or better) because of how repetitive it is.
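You can see that ratio for yourself by gzipping some repetitive, log-shaped data:

```shell
# Generate 10,000 near-identical log lines -- the kind of repetition
# real log streams are full of
i=1
while [ "$i" -le 10000 ]; do
  echo "2024-01-15T10:00:00Z INFO GET /health 200 12ms request_id=$i"
  i=$((i + 1))
done > /tmp/sample.log

# Compress a copy and compare sizes -- expect far better than 5:1
gzip -c /tmp/sample.log > /tmp/sample.log.gz
wc -c /tmp/sample.log /tmp/sample.log.gz
```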

Putting It All Together: A Production-Ready Stack

Here’s what a solid Docker logging architecture looks like for a small to medium team:

 Containers (stdout/stderr)
         |
         v
   Docker json-file driver (with rotation)
         |
         v
   Promtail / Fluent Bit (collection + basic filtering)
         |
         v
   Loki (storage + indexing)
         |
         v
   Grafana (visualization + alerting)

And here’s the complete Docker Compose that ties it all together:

version: "3.8"

x-logging: &default-logging
  driver: json-file
  options:
    max-size: "10m"
    max-file: "5"
    tag: "{{.Name}}"

services:
  # --- Your Application Services ---
  web:
    image: nginx:latest
    ports:
      - "80:80"
    logging: *default-logging

  api:
    image: my-api:latest
    ports:
      - "8080:8080"
    logging: *default-logging

  # --- Logging Infrastructure ---
  loki:
    image: grafana/loki:2.9.0
    ports:
      - "3100:3100"
    volumes:
      - ./config/loki.yml:/etc/loki/local-config.yaml
      - loki-data:/loki
    command: -config.file=/etc/loki/local-config.yaml
    logging: *default-logging

  promtail:
    image: grafana/promtail:2.9.0
    volumes:
      - ./config/promtail.yml:/etc/promtail/config.yml
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock
    command: -config.file=/etc/promtail/config.yml
    depends_on:
      - loki
    logging: *default-logging

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=changeme
    depends_on:
      - loki
    logging: *default-logging

volumes:
  loki-data:
  grafana-data:

Final Thoughts

Docker logging doesn’t have to be painful. Here’s the TL;DR game plan:

  1. Start simple: Use json-file with rotation configured. Always.
  2. Log to stdout: Make sure your apps write to stdout/stderr, not internal files.
  3. Set rotation early: Configure max-size and max-file before you deploy anything. Do this on day one, not after your disk is full.
  4. Centralize when ready: When you have more than a handful of containers, set up Loki + Grafana. The initial time investment pays for itself the first time you need to search logs across multiple services.
  5. Filter aggressively: Don’t ship everything to your central store. Health checks, debug logs, and routine noise don’t need to be there.
  6. Plan for storage: Set retention policies, enable compression, and estimate your costs before they surprise you.

Logging is one of those things that nobody thinks about until everything is on fire. Do yourself a favor and set it up properly now. Your 2 AM self will be grateful.

Now go forth and centralize those logs. Your containers have been screaming into the void long enough.

