The 3 AM Wake-Up Call
It’s Friday night. You update your self-hosted app container. Everything seems fine. Saturday morning, your database is gone. Your config files vanished. You’re sitting there, coffee in hand, wondering what the hell happened.
Here’s the problem: you didn’t mount a volume. Or worse, you mounted a volume but the permissions are wrong. Your container is running as one user, the host filesystem is owned by another, and Docker is quietly failing to persist anything.
Why Containers Lose Data
Containers are designed to be disposable. When you stop and remove a container, anything written inside dies with it. The container filesystem is read-write, but it’s ephemeral. Think of it like writing on a whiteboard inside a box — when you destroy the box, the whiteboard goes with it.
To keep data alive across container restarts, you need a volume — a bridge between the container’s internal filesystem and the host machine.
The Rookie Mistake
You’re running Postgres in Docker:
```shell
docker run -d \
  --name postgres \
  -e POSTGRES_PASSWORD=secret \
  postgres:15
```

This works. You create databases, everything’s fine. You restart the container.
Data’s still there! “Great,” you think.
Now you update to Postgres 16:
```shell
docker run -d \
  --name postgres \
  -e POSTGRES_PASSWORD=secret \
  postgres:16
```

Except the name’s already taken, so you get cute and remove the old container first:
```shell
docker rm postgres
docker run -d --name postgres -e POSTGRES_PASSWORD=secret postgres:16
```

New container starts up. Initializes an empty database. Your data is gone.
The Fix: Named Volumes
Use a named volume:
```shell
docker volume create postgres-data

docker run -d \
  --name postgres \
  -v postgres-data:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secret \
  postgres:15
```

Now your data lives in postgres-data, which persists even after you nuke the container:
```shell
docker stop postgres
docker rm postgres
```
```shell
docker run -d \
  --name postgres \
  -v postgres-data:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secret \
  postgres:16
```

The new container mounts the same volume. Data’s still there.
The Permissions Gotcha
But here’s where it gets weird. Say your container runs as root (the default for many images), so everything it writes is owned by root. Later you run the container as a different user — maybe you’ve got a custom entrypoint that drops privileges.

Now the container can’t read its own data, because it’s owned by root.
Docker volumes created with docker volume create live in /var/lib/docker/volumes/<name>/_data/ on the host. When an empty named volume is first mounted, Docker copies the image’s files at that path into it, ownership included, so permissions usually just line up. But if you’re mounting host paths (bind mounts), you’re on your own:
```shell
# Don't do this without understanding permissions
docker run -d \
  -v /home/user/postgres-data:/var/lib/postgresql/data \
  postgres:15
```

If the container expects to write as user postgres (UID 999) but /home/user/postgres-data is owned by your user (UID 1000), writes fail. You’ll see permission-denied errors in the container’s logs, but nothing surfaces on the host.
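Before pointing a container at a bind mount, it’s worth sanity-checking ownership up front. A minimal sketch (check_owner is a hypothetical helper, not a Docker command; stat -c is GNU coreutils; 999 is the UID of the postgres user in the official image):

```shell
#!/bin/sh
# Sketch: warn if a bind-mount directory isn't owned by the UID the
# container expects. check_owner is a made-up helper name.
check_owner() {
    dir="$1"
    expected_uid="$2"
    actual_uid=$(stat -c '%u' "$dir") || return 1
    [ "$actual_uid" -eq "$expected_uid" ]
}

mkdir -p /tmp/postgres-data-demo   # stand-in for /home/user/postgres-data
if ! check_owner /tmp/postgres-data-demo 999; then
    echo "WARNING: fix ownership (chown 999:999) before starting the container"
fi
```

Run it before docker run, not after the container has already scribbled files with the wrong owner.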
Fix:
```shell
# Create the dir, set permissions
mkdir -p /home/user/postgres-data
sudo chown 999:999 /home/user/postgres-data
sudo chmod 750 /home/user/postgres-data
```
```shell
docker run -d \
  -v /home/user/postgres-data:/var/lib/postgresql/data \
  postgres:15
```

Docker Compose Version
Much cleaner with Compose:
```yaml
version: '3.8'

services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - postgres-data:/var/lib/postgresql/data

volumes:
  postgres-data:
    driver: local
```

That’s it. Compose handles the rest.
How to Check If You’re Safe
Run this:
```shell
docker inspect <container-id> --format='{{json .Mounts}}'
```

You should see something like:

```json
[
  {
    "Type": "volume",
    "Name": "postgres-data",
    "Source": "/var/lib/docker/volumes/postgres-data/_data",
    "Destination": "/var/lib/postgresql/data",
    "Driver": "local",
    "RW": true
  }
]
```

If Mounts is an empty array [], you’ve got no volume. You’re running on ephemeral storage. One restart and it’s gone.
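You can script that check instead of eyeballing JSON. A minimal sketch (warn_if_ephemeral is a hypothetical helper; feed it the docker inspect output from above):

```shell
#!/bin/sh
# Sketch: flag a container that has no mounts at all.
# warn_if_ephemeral is a made-up name; it just string-compares the
# output of: docker inspect <id> --format='{{json .Mounts}}'
warn_if_ephemeral() {
    if [ "$1" = "[]" ]; then
        echo "DANGER: no volumes mounted, data is ephemeral"
    fi
}

# Usage against a real container (assumes Docker is installed):
#   warn_if_ephemeral "$(docker inspect postgres --format='{{json .Mounts}}')"
warn_if_ephemeral "[]"
```

Drop something like this into a cron job or a pre-deploy check and the empty-Mounts case stops being a 3 AM surprise.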
The Nuclear Option: Check Your Data
Before you blow up a container, verify your data’s actually persisting:
```shell
docker volume ls
docker volume inspect postgres-data
```

See a Mountpoint? That’s your data on the host. You can even browse it directly:

```shell
sudo ls /var/lib/docker/volumes/postgres-data/_data/
```

If that directory is empty or doesn’t exist, you know you’re in trouble.
Protecting Volumes From Accidental Deletion
docker-compose down removes containers and networks. docker-compose down -v removes volumes too. You don’t want to run -v by accident.
Add volume protection in your Compose file:
```yaml
volumes:
  postgres-data:
    external: true  # won't be created or deleted by Compose
```

With external: true, Compose refuses to create or delete the volume — you have to manage it manually. This means docker-compose down -v won’t touch it. The tradeoff: you need to create the volume beforehand:
```shell
docker volume create postgres-data
docker-compose up -d
```

Backup Before You Upgrade
Before any major container update, dump your data:
```shell
# Postgres
docker exec postgres pg_dump -U postgres mydb > backup_$(date +%Y%m%d).sql

# SQLite (file-based)
docker cp myapp:/data/app.db ./app.db.backup

# Generic: copy from volume
docker run --rm -v postgres-data:/data -v $(pwd):/backup alpine tar czf /backup/pg-data.tar.gz -C /data .
```

That last one is useful for any volume — it spins up an Alpine container, mounts your volume and a local dir, and tars it up.
Bottom line: volumes aren’t optional. They’re not advanced Docker stuff. They’re the difference between “my data persists” and “my data is a whiteboard in a box that just got thrown away.”