SumGuy's Ramblings

Docker Security Hardening: 15 Things You're Doing Wrong Right Now

Look, I get it. You docker run something, it works, you move on. You’ve got features to ship and a backlog that grows faster than your coffee consumption. Security? That’s Future You’s problem.

Well, congratulations — Future You just arrived, and they’re not happy.

Docker containers have this sneaky reputation for being “secure by default.” And sure, compared to running everything directly on bare metal with root access and a prayer, containers are an improvement. But “better than terrible” is a pretty low bar. The reality is that most Docker deployments have holes you could drive a truck through, and attackers know exactly where to look.

Let’s fix that. Here are 15 things you’re probably doing wrong right now, and more importantly, how to stop doing them.

1. Running Containers as Root

This is the big one. The granddaddy of Docker security sins. By default, processes inside a Docker container run as root. Not “sort of root” or “pretend root” — actual UID 0 root. If an attacker escapes your container (and container escapes are a thing), they land on your host as root. Game over.

The fix:

# In your Dockerfile
RUN groupadd -r appuser && useradd -r -g appuser appuser
USER appuser

Or at runtime:

docker run --user 1000:1000 myimage

Every single Dockerfile you write should have a USER directive. No exceptions. If your app “needs” root, it probably doesn’t — it just needs specific capabilities, which brings us to…

2. Running in Privileged Mode

docker run --privileged is the Docker equivalent of giving your house keys to a stranger and saying “make yourself at home.” Privileged mode gives the container nearly full access to the host system, including all devices, all capabilities, and the ability to modify the host kernel.

I’ve seen people use --privileged because they needed access to one specific device. That’s like demolishing a wall because you need a window.

The fix:

# Instead of --privileged, grant only what you need
docker run --device /dev/snd myimage          # specific device access
docker run --cap-add SYS_PTRACE myimage       # specific capability

Never use --privileged in production. If something in your CI/CD pipeline “requires” it, that’s a red flag, not a justification.
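An even tighter pattern — and a sensible production default — is to drop every capability and add back only what the workload actually needs. A sketch (the capability shown is illustrative; substitute whatever your app requires):

```shell
# Start from zero capabilities, then grant back the minimum.
# NET_BIND_SERVICE lets an unprivileged process bind ports below 1024.
docker run --cap-drop ALL --cap-add NET_BIND_SERVICE myimage
```

If you're not sure which capabilities your app needs, start with `--cap-drop ALL` and add them back one at a time until it works — the resulting list is your app's real privilege footprint.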

3. Exposing the Docker Socket

Mounting /var/run/docker.sock into a container is basically handing over the keys to your entire Docker host. Any container with access to the Docker socket can create new containers, kill existing ones, and — here’s the fun part — spin up a privileged container that mounts the host filesystem. Full compromise in about three commands.

The fix:

# Don't do this unless you absolutely must
# docker run -v /var/run/docker.sock:/var/run/docker.sock myimage

# If you genuinely need Docker-in-Docker, use rootless mode or
# a purpose-built tool like Sysbox

If you’re running monitoring tools that claim they need socket access, look for alternatives that use the Docker API remotely with TLS authentication instead.
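If a tool genuinely has to talk to a Docker daemon, point it at a TLS-protected TCP endpoint rather than the local socket. A sketch using `docker context` — the hostname and certificate paths are placeholders:

```shell
# Talk to a remote daemon over mutually-authenticated TLS
# instead of mounting the local socket.
docker context create remote-host \
  --docker "host=tcp://docker.example.com:2376,ca=/path/ca.pem,cert=/path/cert.pem,key=/path/key.pem"

# Commands run against the remote daemon, authenticated by client cert
docker --context remote-host ps
```

This keeps daemon access auditable and revocable — you rotate a certificate instead of rebuilding every container that had the socket mounted.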

4. Skipping Image Vulnerability Scanning

You wouldn’t deploy code without testing it (right?… RIGHT?), so why are you deploying container images without scanning them for known vulnerabilities? That node:latest base image you pulled last month? It probably has a CVE list longer than your sprint backlog.

The fix:

# Install and run Trivy -- it's free and excellent
trivy image myapp:latest

# Better yet, put it in your CI pipeline
trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:latest

Trivy, Grype, and Snyk Container are all solid options. Pick one. Put it in your pipeline. Fail the build on critical vulnerabilities. Your future self will thank you.

5. Ignoring Read-Only Filesystems

Most containers don’t need to write to their filesystem at all. The application code is baked into the image, configuration comes from environment variables or mounted configs, and logs go to stdout. So why is your container’s filesystem writable?

A writable filesystem means an attacker who gets code execution can drop malware, modify application binaries, or plant persistence mechanisms. A read-only filesystem makes all of that significantly harder.

The fix:

docker run --read-only myimage

# If your app needs to write to specific directories, use tmpfs
docker run --read-only --tmpfs /tmp --tmpfs /app/cache myimage

In Docker Compose:

services:
  webapp:
    image: myapp:latest
    read_only: true
    tmpfs:
      - /tmp
      - /app/cache

6. Not Using Seccomp and AppArmor Profiles

Seccomp (Secure Computing Mode) restricts which system calls a container can make. AppArmor restricts what files and capabilities a process can access. Docker ships with default profiles for both, but the defaults are intentionally permissive to avoid breaking things.

Translation: the defaults prioritize “it works” over “it’s secure.”

The fix:

# Docker applies its default seccomp profile automatically;
# to tighten it, pass a custom profile
docker run --security-opt seccomp=/path/to/custom-seccomp.json myimage

# Apply a custom AppArmor profile
docker run --security-opt apparmor=my-custom-profile myimage

For a tighter seccomp profile, start with Docker’s default and remove syscalls your app doesn’t need. Tools like strace can help you figure out which syscalls your application actually uses. Yes, this takes effort. So does recovering from a breach.
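For reference, a custom seccomp profile is just JSON. A minimal and deliberately restrictive sketch — the syscall list here is illustrative, nowhere near sufficient for a real application:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "exit_group", "futex", "epoll_wait"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

With `defaultAction` set to `SCMP_ACT_ERRNO`, any syscall not in the allowlist fails with an error instead of executing — which is exactly why you want `strace` output before you deploy one of these.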

7. Running Flat Networks with No Segmentation

By default, all containers on the same Docker network can talk to each other. Your frontend can talk to your database. Your database can talk to your Redis cache. Everything can talk to everything. This is a lateral movement paradise for attackers.

The fix:

# Create separate networks for different tiers
docker network create frontend-net
docker network create backend-net

# Frontend only gets frontend network
docker run --network frontend-net nginx

# API gets both (it bridges the two tiers). Attaching multiple networks
# at run time requires Docker Engine 25.0+; on older versions, connect
# the second network after the container starts:
docker run -d --network frontend-net --name api api-server
docker network connect backend-net api

# Database only gets backend network
docker run --network backend-net postgres

In Compose:

networks:
  frontend:
  backend:
    internal: true  # No external access at all

services:
  web:
    networks: [frontend]
  api:
    networks: [frontend, backend]
  db:
    networks: [backend]

The internal: true flag is your friend. Use it for any network that doesn’t need to reach the internet.

8. Storing Secrets in Environment Variables (or Worse, in Images)

I need you to sit down for this one. Environment variables are not a secrets management solution. They show up in docker inspect. They show up in /proc/1/environ inside the container. They show up in your orchestrator’s API. They are visible to anyone with access to the Docker daemon.

And if you’ve ever done ENV DATABASE_PASSWORD=hunter2 in a Dockerfile… that password is now stored in the image’s config metadata, visible to anyone who runs docker history or docker inspect. Secret files are just as bad: copy one in and delete it in a later layer, and it still sits in the earlier layer forever. Docker images are like the internet — they never forget.

The fix:

# Use Docker secrets (Swarm mode)
echo "my-secret-password" | docker secret create db_password -
docker service create --secret db_password myapp

# Or mount secrets as files with proper permissions
docker run -v /secure/path/db_password:/run/secrets/db_password:ro myimage

For Kubernetes, use actual Secrets (or better yet, external secret stores like Vault, AWS Secrets Manager, or SOPS). For local development, Docker Compose secrets work fine:

secrets:
  db_password:
    file: ./secrets/db_password.txt

services:
  app:
    secrets:
      - db_password

Your app reads from /run/secrets/db_password. Clean, secure, and it doesn’t leak into inspect output.
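Build-time secrets (private registry tokens, for instance) deserve the same care. BuildKit’s secret mounts expose a secret to a single RUN step without it ever landing in a layer. A sketch — the secret id, URL, and file paths are assumptions:

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine:3.19.1
# The secret is mounted at /run/secrets/api_token for this one RUN
# step only -- it never appears in a layer or in the image config.
RUN --mount=type=secret,id=api_token \
    wget --header="Authorization: Bearer $(cat /run/secrets/api_token)" \
         -O /app-config.json https://config.example.com/app
```

Build it with `docker build --secret id=api_token,src=./api_token.txt .` — the file on disk is the only copy of the secret.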

9. Not Signing or Verifying Images

How do you know that mycompany/webapp:latest image you just pulled is actually the one your team built? Without image signing and verification, you don’t. You’re trusting that nobody tampered with your registry, your CI pipeline, or the network between them.

The fix:

# Enable Docker Content Trust
export DOCKER_CONTENT_TRUST=1

# Now pushes will be signed and pulls will be verified
docker push mycompany/webapp:latest
docker pull mycompany/webapp:latest  # Fails if not signed

# For more advanced signing, use cosign (from Sigstore)
cosign sign --key cosign.key myregistry/myimage:latest
cosign verify --key cosign.pub myregistry/myimage:latest

At minimum, enable Docker Content Trust in your production environments. It takes five minutes and prevents an entire class of supply chain attacks.

10. Never Running Docker Bench Security

Docker Bench for Security is a free, open-source script that checks your Docker installation against the CIS Docker Benchmark. It examines host configuration, Docker daemon settings, container runtime parameters, and more. It literally tells you what you’re doing wrong.

And yet almost nobody runs it.

The fix:

# It's one command. Seriously.
docker run --rm --net host --pid host --userns host --cap-add audit_control \
  -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
  -v /var/lib:/var/lib:ro \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /usr/lib/systemd:/usr/lib/systemd:ro \
  -v /etc:/etc:ro \
  docker/docker-bench-security

Run it. Read the output. Fix the warnings. Run it again. Make it part of your regular security audit process. The output is color-coded and human-readable. There’s no excuse not to use this.

11. Forgetting the no-new-privileges Flag

Linux has this fun feature where a process can gain additional privileges after it starts, via mechanisms like setuid binaries or capabilities inheritance. Inside a container, this means a compromised process could escalate from your unprivileged app user to root.

The fix:

docker run --security-opt no-new-privileges myimage

In Compose:

services:
  app:
    security_opt:
      - no-new-privileges:true

This should be on by default for everything. It prevents any child process from gaining more privileges than its parent. There’s almost never a legitimate reason to leave this off.
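You can verify the flag took effect by checking the kernel’s no_new_privs bit inside the container — a quick sketch:

```shell
# With the flag set, the status file should show NoNewPrivs: 1
docker run --rm --security-opt no-new-privileges alpine \
  grep NoNewPrivs /proc/self/status
```

A value of 1 means no exec in this process tree — setuid binary or otherwise — can ever gain privileges the parent didn’t have.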

12. Ignoring User Namespaces

User namespaces remap the UID inside the container to a different (unprivileged) UID on the host. So even if a process runs as root (UID 0) inside the container, it maps to, say, UID 100000 on the host — which has zero special privileges.

This is your safety net for when things go wrong. Container escape plus user namespaces equals “congratulations, you escaped to an unprivileged account that can’t do anything interesting.”

The fix:

# Configure the Docker daemon (/etc/docker/daemon.json)
{
  "userns-remap": "default"
}

# Restart Docker
sudo systemctl restart docker

Docker will automatically create a dockremap user and set up subordinate UID/GID ranges. Some workloads need adjustments (especially those mounting host volumes), but the security benefit is substantial.
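The remapping itself lives in `/etc/subuid` and `/etc/subgid`. A typical entry after enabling the feature looks like this (the exact starting UID and range depend on your distro):

```
# /etc/subuid -- format is  user:first-host-UID:range-size
dockremap:100000:65536
```

With that entry, container UID 0 maps to host UID 100000, container UID 1 to 100001, and so on — you can confirm it by finding your container’s processes under those high UIDs in `ps aux` on the host.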

13. Setting No Resource Limits

A container with no resource limits can consume all available CPU, memory, and disk I/O on the host. This isn’t just a performance problem — it’s a security problem. A denial-of-service attack against one container can take down every other container on the same host.

It also means a cryptominer that sneaks into one of your containers gets to use 100% of your expensive cloud compute. Fun.

The fix:

docker run \
  --memory 512m \
  --memory-swap 512m \
  --cpus 1.0 \
  --pids-limit 100 \
  --ulimit nofile=1024:2048 \
  myimage

In Compose:

services:
  app:
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 128M
    pids_limit: 100

The --pids-limit flag is especially important — it prevents fork bombs. Without it, a single :(){ :|:& };: can create enough processes to crash your host.

14. Committing .env Files into Images

This is the cousin of the secrets-in-environment-variables problem. Your .env file contains database URLs, API keys, third-party service credentials, and probably your Netflix password. (Kidding. Mostly.) If it ends up in your Docker image, all of those secrets are accessible to anyone who can pull that image.

The fix:

Create a .dockerignore file in your project root:

# .dockerignore
.env
.env.*
*.pem
*.key
credentials.json
secrets/
.git
node_modules

Then verify:

# Build your image
docker build -t myapp .

# Check that your secrets aren't in there
docker run --rm myapp ls -la /app/.env
# Should return "No such file or directory"

# For extra paranoia, inspect the layers
docker history myapp
dive myapp  # Great tool for exploring image layers

The .dockerignore file works exactly like .gitignore. If you don’t have one, create one right now. I’ll wait.

15. Using Outdated Base Images

That ubuntu:20.04 base image you chose two years ago and never updated? It’s accumulated vulnerabilities like a neglected swimming pool accumulates algae. Every day you don’t update, new CVEs are discovered in the packages baked into that image.

And if you’re using :latest tags, you’re playing a different but equally dangerous game — you have no idea what you’re actually running, and your builds aren’t reproducible.

The fix:

# Pin to specific versions -- not :latest
FROM node:20.11.1-alpine3.19

# Use minimal base images
FROM alpine:3.19.1
FROM gcr.io/distroless/static-debian12
FROM scratch  # The ultimate minimal image

# Multi-stage builds to reduce attack surface
FROM node:20.11.1 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci  # install everything -- the build step needs dev dependencies
COPY . .
RUN npm run build

FROM gcr.io/distroless/nodejs20-debian12
COPY --from=builder /app/dist /app
CMD ["/app/server.js"]

Set up automated image rebuilds — tools like Dependabot, Renovate, or Watchtower can help. Pin your versions so you know exactly what you’re running, but rebuild regularly so you get security patches.

Distroless images from Google deserve special mention. They contain only your application and its runtime dependencies — no shell, no package manager, no utilities. An attacker who gets code execution can’t even run ls. That’s the energy we want.

Putting It All Together

Here’s what a hardened Docker Compose service looks like when you combine everything:

version: '3.8'

services:
  webapp:
    image: mycompany/webapp:1.2.3  # Pinned version, signed
    read_only: true
    user: "1000:1000"
    security_opt:
      - no-new-privileges:true
      - seccomp:./seccomp-profile.json
    tmpfs:
      - /tmp:noexec,nosuid,size=64M
    networks:
      - frontend
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 512M
    pids_limit: 100
    secrets:
      - db_password
      - api_key

networks:
  frontend:
  backend:
    internal: true

secrets:
  db_password:
    external: true
  api_key:
    external: true

Compare that to docker run -d --privileged -v /:/host myapp:latest and tell me which one you’d rather defend in a security audit.

The Checklist

Because I know you’re going to bookmark this and forget about it, here’s the quick-hit version:

  1. Never run as root — use USER in Dockerfiles
  2. Never use --privileged — grant specific capabilities instead
  3. Don’t mount the Docker socket — find alternatives
  4. Scan images with Trivy or similar — fail builds on critical CVEs
  5. Use --read-only filesystems — mount tmpfs where needed
  6. Apply custom seccomp/AppArmor profiles — defaults are too loose
  7. Segment networks — not everything needs to talk to everything
  8. Use Docker secrets or Vault — not environment variables
  9. Sign and verify images — enable Docker Content Trust
  10. Run Docker Bench Security — regularly
  11. Set no-new-privileges — on everything
  12. Enable user namespaces — remap container root to unprivileged host user
  13. Set resource limits — CPU, memory, PIDs
  14. Use .dockerignore — keep secrets out of images
  15. Update base images — pin versions, rebuild regularly, go distroless

You don’t have to do all 15 today. But you should do all 15 eventually. Start with the ones that scare you the most (probably 1, 2, 3, and 8), and work your way down the list.

Your containers will still work. Your deploys will still ship. But now, when some script kiddie comes knocking, they’ll find a locked door instead of a welcome mat.

And that? That’s worth the effort.

