Why Self-Hosted Apps Lose Data After Updates

By SumGuy 5 min read

The 3 AM Wake-Up Call

It’s Friday night. You update your self-hosted app container. Everything seems fine. Saturday morning, your database is gone. Your config files vanished. You’re sitting there, coffee in hand, wondering what the hell happened.

Here’s the problem: you didn’t mount a volume. Or worse, you mounted a volume but the permissions are wrong. Your container is running as one user, the host filesystem is owned by another, and Docker is quietly failing to persist anything.

Why Containers Lose Data

Containers are designed to be disposable. When you stop and remove a container, anything written inside dies with it. The container filesystem is read-write, but it’s ephemeral. Think of it like writing on a whiteboard inside a box — when you destroy the box, the whiteboard goes with it.

To keep data alive across container restarts, you need a volume — a bridge between the container’s internal filesystem and the host machine.
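You can see the whiteboard-in-a-box effect in about ten seconds. A minimal demo, assuming the docker CLI and the alpine image are available:

```shell
# Write a file inside a throwaway container, then destroy the container
docker run --name scratch alpine sh -c 'echo hello > /data.txt'
docker rm scratch    # the container's writable layer is deleted with it

# A fresh container starts from the image, not from the old container
docker run --rm alpine cat /data.txt || echo "file is gone"
```

The last command fails and prints "file is gone": the file only ever existed in the first container's writable layer, which died with it.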

The Rookie Mistake

You’re running Postgres in Docker:

Terminal window
docker run -d \
  --name postgres \
  -e POSTGRES_PASSWORD=secret \
  postgres:15

This works. You create databases, everything’s fine. You restart the container.

Data’s still there! “Great,” you think.

Now you update to Postgres 16:

Terminal window
docker run -d \
  --name postgres \
  -e POSTGRES_PASSWORD=secret \
  postgres:16

Except that fails: the name postgres is already in use. So you get cute and remove the old container first:

Terminal window
docker rm postgres
docker run -d --name postgres -e POSTGRES_PASSWORD=secret postgres:16

New container starts up. Initializes an empty database. Your data is gone.

The Fix: Named Volumes

Use a named volume:

Terminal window
docker volume create postgres-data
docker run -d \
  --name postgres \
  -v postgres-data:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secret \
  postgres:15

Now your data lives in postgres-data, which persists even after you nuke the container:

Terminal window
docker stop postgres
docker rm postgres
docker run -d \
  --name postgres \
  -v postgres-data:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secret \
  postgres:16

The new container mounts the same volume. Your files are still there. (One caveat: a Postgres major-version jump also needs a dump/restore or pg_upgrade, because the 16 binaries won't read a 15-format data directory as-is. The volume keeps the files safe either way; it just doesn't do the format migration for you.)
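For that major-version case, the dump/restore route looks roughly like this. A sketch reusing the names from the example above (postgres-data-16 is a made-up name for the new volume, and the fixed sleep is a crude stand-in for a proper readiness check like pg_isready):

```shell
# 1. Dump everything while the old container is still running
docker exec postgres pg_dumpall -U postgres > all.sql

# 2. Replace the container, pointing the new major version at a fresh volume
docker stop postgres && docker rm postgres
docker volume create postgres-data-16
docker run -d --name postgres \
  -v postgres-data-16:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secret \
  postgres:16

# 3. Give the server a moment to initialize, then load the dump
sleep 5
docker exec -i postgres psql -U postgres < all.sql
echo "upgrade done; verify your databases before deleting the old volume"
```

Keep the old volume around until you've confirmed the restore; it's your rollback path.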

The Permissions Gotcha

But here's where it gets weird. Say your container runs as root (the default for many images; not great, but it happens). Your data gets written as root. Now you switch to an image, or a custom entrypoint, that drops privileges to an unprivileged user.

The container can't read its own data anymore, because it's owned by root.

Docker volumes created with docker volume create live in /var/lib/docker/volumes/<name>/_data/ on the host, and when a named volume is first mounted while empty, Docker copies in the image's files at the mount path, ownership included. Usually this just works. But if you're mounting host paths (bind mounts), you're on your own:

Terminal window
# Don't do this without understanding permissions
docker run -d \
  -v /home/user/postgres-data:/var/lib/postgresql/data \
  postgres:15

If the container expects to write as user postgres (UID 999) but /home/user/postgres-data is owned by your user (UID 1000), Postgres can't initialize its data directory: the container logs a permission error and exits. Other images are less noisy about it, which is worse.

Fix:

Terminal window
# Create the dir, set permissions
mkdir -p /home/user/postgres-data
chown 999:999 /home/user/postgres-data
chmod 750 /home/user/postgres-data
docker run -d \
  -v /home/user/postgres-data:/var/lib/postgresql/data \
  postgres:15
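A quick sanity check before starting the container: compare the directory's owner against the UID the image actually runs as. A sketch (999 is the commonly used UID in the official postgres image, but treat it as an assumption; you can confirm with docker run --rm postgres:15 id -u postgres):

```shell
# Warn if the bind-mount directory isn't owned by the UID the container writes as
expected_uid=999                         # assumption: postgres UID inside the image
data_dir=/home/user/postgres-data

if [ -d "$data_dir" ]; then
  actual_uid=$(stat -c %u "$data_dir")   # GNU stat; on macOS use: stat -f %u
  if [ "$actual_uid" -ne "$expected_uid" ]; then
    echo "WARNING: $data_dir owned by UID $actual_uid, expected $expected_uid" >&2
  fi
else
  echo "WARNING: $data_dir does not exist yet" >&2
fi
```

Run this before docker run and you catch the mismatch up front instead of from the container logs.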

Docker Compose Version

Much cleaner with Compose:

version: '3.8'
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - postgres-data:/var/lib/postgresql/data

volumes:
  postgres-data:
    driver: local

That’s it. Compose handles the rest.
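Upgrades then shrink to a one-line edit: bump the image tag and re-run up. A self-contained sketch (it writes the Compose file from above first, so the sed has something to edit; with a real project you'd just edit the tag by hand):

```shell
# Recreate the Compose file from the example above
cat > docker-compose.yml <<'EOF'
version: '3.8'
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - postgres-data:/var/lib/postgresql/data
volumes:
  postgres-data:
EOF

# Bump the tag in place
sed -i 's/image: postgres:15/image: postgres:16/' docker-compose.yml
grep 'image:' docker-compose.yml   # now reads image: postgres:16

# docker-compose pull && docker-compose up -d would recreate the container;
# the named volume is reattached untouched
```

Compose recreates the container but leaves the named volume alone, which is the whole point.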

How to Check If You’re Safe

Run this:

Terminal window
docker inspect <container-id> --format='{{json .Mounts}}'

You should see something like:

[
  {
    "Type": "volume",
    "Name": "postgres-data",
    "Source": "/var/lib/docker/volumes/postgres-data/_data",
    "Destination": "/var/lib/postgresql/data",
    "Driver": "local",
    "RW": true
  }
]

If Mounts is an empty array [], you've got no volume. You're running on ephemeral storage. One docker rm (or one recreate-on-update) and it's gone.
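You can turn that check into a quick audit across everything that's running. A sketch (warn_if_no_mounts is a made-up helper name; assumes the docker CLI is on PATH):

```shell
# Print a warning for every running container that has zero mounts
warn_if_no_mounts() {
  name=$1
  count=$2
  if [ "$count" -eq 0 ]; then
    echo "$name: no mounts at all -- everything it writes is ephemeral"
  fi
}

for c in $(docker ps --format '{{.Names}}'); do
  warn_if_no_mounts "$c" "$(docker inspect "$c" --format '{{len .Mounts}}')"
done
```

Silence means every running container has at least one mount; it doesn't prove the mounts cover the right paths, so still eyeball the Destination fields.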

The Nuclear Option: Check Your Data

Before you blow up a container, verify your data’s actually persisting:

Terminal window
docker volume ls
docker volume inspect postgres-data

See a Mountpoint? That’s your data on the host. You can even browse it directly:

Terminal window
sudo ls /var/lib/docker/volumes/postgres-data/_data/

If that directory is empty or doesn’t exist, you know you’re in trouble.

Protecting Volumes From Accidental Deletion

docker-compose down removes containers and networks. docker-compose down -v removes volumes too. You don’t want to run -v by accident.

Add volume protection in your Compose file:

docker-compose.yml
volumes:
  postgres-data:
    external: true # won't be created or deleted by Compose

With external: true, Compose refuses to create or delete the volume — you have to manage it manually. This means docker-compose down -v won’t touch it. The tradeoff: you need to create the volume beforehand:

Terminal window
docker volume create postgres-data
docker-compose up -d

Backup Before You Upgrade

Before any major container update, dump your data:

Terminal window
# Postgres
docker exec postgres pg_dump -U postgres mydb > backup_$(date +%Y%m%d).sql
# SQLite (file-based)
docker cp myapp:/data/app.db ./app.db.backup
# Generic: copy from volume
docker run --rm -v postgres-data:/data -v $(pwd):/backup alpine tar czf /backup/pg-data.tar.gz -C /data .

That last one is useful for any volume — it spins up an Alpine container, mounts your volume and a local dir, and tars it up.
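The restore is the mirror image: mount an empty volume and unpack the tarball into it. A sketch (postgres-data-restored is a made-up volume name; inspect the result before pointing the app at it):

```shell
# Unpack the backup into a fresh volume using the same throwaway-Alpine trick
docker volume create postgres-data-restored
docker run --rm \
  -v postgres-data-restored:/data \
  -v "$(pwd)":/backup \
  alpine tar xzf /backup/pg-data.tar.gz -C /data
echo "restored into postgres-data-restored; inspect before starting the app against it"
```

Restoring into a new volume rather than the original means a botched restore costs you nothing.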

Bottom line: volumes aren’t optional. They’re not advanced Docker stuff. They’re the difference between “my data persists” and “my data is a whiteboard in a box that just got thrown away.”

