Docker Exit Codes: Why Your Container Keeps Restarting

By SumGuy · 5 min read

Your container restarts every 30 seconds. You don’t know why. You check the logs and there’s nothing useful. Or worse, the logs are completely empty.

Exit codes. Docker uses them to tell you why a container crashed, but most people never look at them. Let’s fix that.

The Exit Code Basics

When a container exits, Docker records an exit code. This comes from the process inside the container, or from Docker itself.

Terminal window
docker inspect mycontainer | grep -A 5 State

Look for "ExitCode": X. That number is your clue.
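Grepping the whole State block works, but `docker inspect` can also pull out just the field you want with a Go template, and `docker ps -a` shows the code in the Status column (container name here is a placeholder):

Terminal window
```shell
# Extract just the exit code from the container's state
docker inspect --format '{{.State.ExitCode}}' mycontainer

# The Status column shows it too, e.g. "Exited (137) 2 minutes ago"
docker ps -a --filter name=mycontainer --format '{{.Names}}: {{.Status}}'
```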

The Exit Code Reference

0 — Success

The container exited cleanly. No error. This is what you want. If restart: always is set and the container keeps restarting on exit 0, check your command — maybe it’s finishing and exiting intentionally.

1 — Application Error

The app inside crashed or exited with status 1. This is your app’s problem, not Docker’s. Check the logs:

Terminal window
docker logs mycontainer

Common causes: out of memory on the app, unhandled exception, missing dependency, wrong env var.

2 — Misuse of Shell Builtins

Less common in containers, but if you see it, a shell script is misusing a builtin or has a syntax error.

125 — Docker Run Error

Docker couldn’t run the container. Usually: bad image, missing volume mount, invalid flag. Check docker run syntax:

Terminal window
docker run --invalid-flag myimage
# Error response from daemon: unknown flag: --invalid-flag

126 — Container Cannot Invoke Command

The entrypoint or command can’t be executed. Usually: the file doesn’t exist, the permissions are wrong, or the file isn’t actually executable (missing shebang, wrong architecture).

Example: You set ENTRYPOINT ["/app/server"] but /app/server doesn’t exist in the image.

# Bad: /app/server is never copied into the image
ENTRYPOINT ["/app/server"]

# Good: copy the binary and make it executable first
COPY --chown=app:app ./server /app/server
RUN chmod +x /app/server
ENTRYPOINT ["/app/server"]
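To check whether the file actually made it into the image with the right permissions, override the entrypoint and just look (image name is a placeholder):

Terminal window
```shell
# Bypass the broken ENTRYPOINT and list the file instead
docker run --rm --entrypoint ls myimage -l /app/server
```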

127 — Command Not Found

You’re trying to run a command that doesn’t exist in the container. Very common with shell scripts that assume tools are installed.

Terminal window
# In your script
redis-cli ping   # ERROR: redis-cli not found in container (exit 127)
# Fix: in your Dockerfile, install it first
RUN apt-get update && apt-get install -y redis-tools
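Before rebuilding, you can confirm whether the tool exists inside the image at all (image name is a placeholder):

Terminal window
```shell
# command -v exits non-zero if the tool isn't on PATH inside the container
docker run --rm myimage sh -c 'command -v redis-cli || echo "redis-cli missing"'
```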

128 + N — Fatal Signal

The process inside the container was killed by a signal, and the exit code is 128 plus the signal number. Subtract 128 to identify the signal: SIGKILL is 9 (128 + 9 = 137), SIGTERM is 15 (128 + 15 = 143). The two most common cases get their own sections below.
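You can see the 128 + N rule without Docker at all; a shell killed by SIGTERM reports 143:

Terminal window
```shell
# Kill the child shell with SIGTERM (signal 15)
sh -c 'kill -TERM $$'
echo $?   # 143 = 128 + 15
```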

137 — Out of Memory

The container was SIGKILL’d (137 = 128 + 9), most often because it hit its memory limit. That’s the OOM killer’s signature, though you’ll also see 137 after a `docker kill` or a `docker stop` that ran out of patience.

Check your memory limits:

Terminal window
docker inspect mycontainer | grep -i memory

And current usage:

Terminal window
docker stats mycontainer

If the container is using 98% of its limit and then exits 137, increase the limit:

Terminal window
docker run -m 2g myimage

Or in docker-compose:

docker-compose.yml
services:
  app:
    image: myimage
    mem_limit: 2g
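To see exit 137 deliberately, run something memory-hungry under a small limit (assumes the alpine image is available; exactly when the kernel steps in depends on your system):

Terminal window
```shell
# tail on /dev/zero buffers endlessly until the OOM killer SIGKILLs it
docker run --rm -m 32m alpine sh -c 'tail /dev/zero'
echo $?   # 137 once the process is OOM-killed
```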

143 — Graceful Shutdown

This is normal when you stop a container (143 = 128 + 15, SIGTERM). Docker sends SIGTERM, the app has 10 seconds (the default stop timeout) to shut down, then gets SIGKILL if it doesn’t.

If a container is constantly exiting with 143, something is sending SIGTERM — check orchestration, health checks, or orchestrator restart policies.

Debugging in Real Time

Watch a container exit:

Terminal window
docker run --rm myimage bash -c "exit 42"
echo $? # prints 42

Or let Docker tell you:

Terminal window
docker run --rm myimage /nonexistent
# docker: Error response from daemon: OCI runtime create failed:
# exec: "/nonexistent": stat /nonexistent: no such file or directory

For containers that keep restarting, add a sleep to keep it alive:

Terminal window
docker run -it myimage sleep 300

Then check what’s actually happening in the logs and environment.
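With the container held open, exec in and poke around by hand (`docker ps -lq` grabs the ID of the most recently created container):

Terminal window
```shell
# Open a shell inside the container started above
docker exec -it $(docker ps -lq) sh
# inside: check env vars and files, then try the real command manually
```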

Check Exit Code in Compose

Terminal window
docker-compose ps
docker-compose logs myservice
docker inspect $(docker-compose ps -q myservice) | grep ExitCode

The Restart Policy Trap

If your container has restart: always, even an intentional exit 0 restarts it. Make sure your command isn’t finishing when it should keep running:

docker-compose.yml
services:
  web:
    image: nginx
    restart: always  # nginx doesn't exit unless killed

But:

docker-compose.yml
services:
  migrate:
    image: myapp
    command: python manage.py migrate
    # This finishes and exits 0, which is correct;
    # don't use restart: always here
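If the one-shot job needs to finish before a long-lived service starts, the Compose spec’s `service_completed_successfully` condition handles the ordering (service and image names here are placeholders matching the examples above):

docker-compose.yml
```yaml
services:
  web:
    image: myapp
    depends_on:
      migrate:
        condition: service_completed_successfully
  migrate:
    image: myapp
    command: python manage.py migrate
    restart: "no"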

Common Gotchas

Buffering: If you see no logs before an exit, stdout/stderr might be buffered. Force unbuffered mode:

ENV PYTHONUNBUFFERED=1

Health Checks Getting the Container Restarted:

docker-compose.yml
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost/health"]
  interval: 10s
  timeout: 5s
  retries: 3
  # After 3 failed retries the container is marked unhealthy.
  # Plain Docker only flags it; orchestrators (Swarm, Kubernetes)
  # are what restart unhealthy containers.

Check if health checks are passing:

Terminal window
docker inspect mycontainer | grep -A 20 Health
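The same Go-template trick from earlier pulls just the health state, including the failing streak and recent probe output (`mycontainer` is a placeholder):

Terminal window
```shell
# Last health check results as JSON: Status, FailingStreak, probe logs
docker inspect --format '{{json .State.Health}}' mycontainer
```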

Exit codes are Docker’s way of telling a story. Exit 0 means “I finished.” Exit 137 means “I ran out of memory.” Exit 1 means “Something went wrong inside.” Listen to what the exit code is telling you, and you’ll debug 10x faster.

