
Diagnosing Slow Linux Boot with systemd-analyze

By SumGuy 5 min read

Your server takes forever to boot. You restart it once a month, and it sits for 5 minutes doing… something. You have no idea what’s slow, so you just wait and move on. But what if you could see exactly which services are dragging things down?

systemd-analyze is a built-in tool that shows you boot performance. It’s criminally underused. Run it once, find the slow services, and shave minutes off your boot time.

The Quick Overview

Terminal window
$ systemd-analyze
Startup finished in 2.543s (kernel) + 8.234s (userspace) = 10.777s
graphical.target reached after 8.234s in userspace

Your system took 2.5 seconds for the kernel to boot, then 8.2 seconds of userspace services. Total 10.8 seconds. Is that good? On a VM with nothing running, it should be sub-5 seconds. On a loaded server, 10-20 seconds is normal.

Find the Actual Bottlenecks

Terminal window
$ systemd-analyze blame
3.245s systemd-udev-settle.service
2.102s postgres@13-main.service
1.876s mysql.service
1.234s docker.service
0.987s systemd-journal-flush.service
0.654s ssh.service
0.345s networking.service
0.123s systemd-sysctl.service

Sorted by time taken — this is the real offender list. One caveat: these times overlap because systemd starts services in parallel, so they won't sum to your total boot time. Even so, systemd-udev-settle at 3.2 seconds is huge.
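The blame output pipes cleanly into standard tools. Here's a minimal sketch that totals the top offenders, with the sample output above inlined so it runs anywhere (note: very slow units print times like "1min 3.2s", which this simple parse doesn't handle):

```shell
# Sum the top entries from `systemd-analyze blame`.
# Sample data stands in for: systemd-analyze blame | head -5
blame='3.245s systemd-udev-settle.service
2.102s postgres@13-main.service
1.876s mysql.service
1.234s docker.service
0.987s systemd-journal-flush.service'

echo "$blame" | awk '{ gsub(/s$/, "", $1); total += $1 }
                     END { printf "top-5 total: %.3fs\n", total }'
# -> top-5 total: 9.444s
```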

But wait — this doesn’t tell you why things are slow or what they’re waiting on. Use critical-chain:

Terminal window
$ systemd-analyze critical-chain
graphical.target @8.234s
└─multi-user.target @8.234s
  └─docker.service @6.234s +1.234s
    └─basic.target @3.456s
      └─postgresql.service @2.234s +2.102s
        └─system-getty.slice @0.234s
          └─...

This shows the dependency chain. The @ figure is when each unit became active, counted from startup; the + figure is how long that unit's own startup took. graphical.target waits on multi-user.target; multi-user.target waits on docker.service, which spent 1.2 seconds starting; further down the chain, postgresql.service spent 2.1 seconds.

The critical path is: kernel → getty → postgresql (2.1s) → docker (1.2s) → multi-user (0s, just a synchronization point).

That’s what you need to optimize: postgresql and docker in this example.
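The + figures are the ones worth ranking. A small sketch that extracts them from critical-chain output — sample lines are inlined here; on a real system you'd pipe `systemd-analyze critical-chain` in instead:

```shell
# Extract each unit's own startup time (the +N.NNNs figures) and rank them.
chain='└─docker.service @6.234s +1.234s
  └─postgresql.service @2.234s +2.102s'

echo "$chain" | grep -oE '\+[0-9.]+s' | tr -d '+s' | sort -rn
# -> 2.102
#    1.234
```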

Visualize It

Terminal window
$ systemd-analyze plot > boot.svg

This writes an SVG of the boot timeline; open it in a browser or image viewer. Services are color-coded and arranged by start time, so parallelization failures (services starting sequentially when they could run in parallel) stand out immediately.

Deep Dive: One Service’s Startup

Want to know exactly what a service is doing during startup?

Terminal window
$ journalctl -u postgres@13-main.service
-- Logs begin at Tue 2025-05-06 10:23:45 UTC, end at Tue 2025-05-06 10:23:48 UTC. --
May 06 10:23:45 server postgres[1234]: [initdb] initializing...
May 06 10:23:45 server postgres[1234]: [initdb] done (15.234s)
May 06 10:23:48 server postgres[1234]: ready for connections

Postgres took 2.1 seconds in this boot. The initdb step is a one-time cost on the very first start; on every subsequent boot after a clean shutdown, it should be ready in under a second.

Common Bottlenecks and Fixes

1. systemd-udev-settle

This waits for all device discovery to finish. On systems with many devices or slow storage, it can take seconds.

Check what it’s waiting on:

Terminal window
$ udevadm info --query=all --name=/dev/sda

Fix: systemd-udev-settle is deprecated upstream, and few modern services genuinely need it. If nothing on your system does, mask it:

Terminal window
$ sudo systemctl mask systemd-udev-settle.service

Or cap startup timeouts globally in /etc/systemd/system.conf (note this applies to every service, not just udev-settle):

/etc/systemd/system.conf
[Manager]
DefaultTimeoutStartSec=10s

2. Database Services (postgres, mysql)

These often have recovery or initialization steps on startup.

Check the logs:

Terminal window
$ journalctl -u postgres@13-main.service -n 50

Look for long operations: crash recovery (WAL replay after an unclean shutdown), first-run initialization, or long-running integrity checks.

Fix: These are usually one-time. After a clean shutdown, the next boot is fast. If slow every boot:

Terminal window
# Check whether the database came up cleanly
$ systemctl status postgres@13-main.service
# "failed", or logs mentioning recovery, mean the last shutdown was unclean
# A clean stop now means the next start (and the next boot) is fast
$ sudo systemctl stop postgres@13-main.service
$ sudo systemctl start postgres@13-main.service
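To confirm crash recovery is the culprit, grep the boot's journal for it. The lines below are typical postgres recovery messages, inlined as sample data; the real input would be `journalctl -b -u postgres@13-main.service`:

```shell
# Count recovery-related lines in a journal excerpt.
log='May 06 10:23:45 server postgres[1234]: database system was not properly shut down; automatic recovery in progress
May 06 10:23:47 server postgres[1234]: redo done at 0/16341A8'

echo "$log" | grep -ci 'recovery'
# -> 1
```

Anything greater than zero means the previous shutdown wasn't clean.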

3. Network Services

Services like networking.service or a DHCP client can block boot while they wait for the network to come up.

Terminal window
$ systemd-analyze critical-chain | grep networking
networking.service @2.345s +1.234s

Fix: Set timeouts so they don’t block boot:

/etc/systemd/system/networking.service.d/override.conf
[Service]
TimeoutStartSec=5s

Or, if your service doesn't need confirmed connectivity, order it after network.target instead of network-online.target (which blocks until the network is actually up). Note that After= and Wants= belong in the [Unit] section, not [Service]:

/etc/systemd/system/myapp.service
[Unit]
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/myapp
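On systems where systemd-networkd backs network-online.target, the unit doing the actual waiting is systemd-networkd-wait-online.service, and its binary accepts a --timeout flag. A sketch of a drop-in capping it at 10 seconds — the binary path varies by distro, so check `systemctl cat systemd-networkd-wait-online.service` first:

```ini
# /etc/systemd/system/systemd-networkd-wait-online.service.d/override.conf
[Service]
# Empty ExecStart= clears the inherited command before replacing it
ExecStart=
ExecStart=/usr/lib/systemd/systemd-networkd-wait-online --timeout=10
```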

4. Docker

Docker initializes cgroups, loads images, and starts containers. On cold boot, this is slow.

Terminal window
$ journalctl -u docker.service
May 06 10:23:45 server dockerd[1234]: Loading containers... done (3.456s)
May 06 10:23:46 server dockerd[1234]: Loading images... done (0.234s)
May 06 10:23:47 server dockerd[1234]: Starting containers... done (0.567s)

Fix: Trim the daemon's startup work in /etc/docker/daemon.json. "live-restore": true keeps containers running across daemon restarts, and "userland-proxy": false skips spawning a proxy process per published port. (Older guides also suggest "disable-legacy-registry", but that option has since been removed and a modern dockerd will refuse to start with it.)

/etc/docker/daemon.json
{
"storage-driver": "overlay2",
"userland-proxy": false,
"live-restore": true
}

Or take dockerd out of the boot path entirely and let socket activation start it on first use:

Terminal window
$ sudo systemctl disable docker.service
$ sudo systemctl enable docker.socket
# dockerd now starts on the first client connection instead of at boot
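If you'd rather have Docker start automatically but off the boot's critical path, a systemd timer can launch it shortly after boot. docker-delayed.timer is a hypothetical unit name for this sketch:

```ini
# /etc/systemd/system/docker-delayed.timer (hypothetical unit name)
[Unit]
Description=Start Docker shortly after boot

[Timer]
OnBootSec=2min
Unit=docker.service

[Install]
WantedBy=timers.target
```

Enable it with `sudo systemctl enable docker-delayed.timer` after disabling docker.service.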

Practical Example

Your server boots in 20 seconds, and you want it under 10.

Terminal window
$ systemd-analyze blame
8.234s postgres@13-main.service
6.123s docker.service
3.456s nginx.service

Postgres takes 8 seconds. Docker takes 6. These are dependencies for your app.

Step 1: Check if postgres is even needed at boot

Terminal window
$ systemctl list-dependencies graphical.target | grep postgres

If not directly required, remove it from the boot chain:

Terminal window
$ sudo systemctl disable postgres@13-main.service
$ sudo systemctl start postgres@13-main.service # Start it manually when needed

Step 2: Make docker start in parallel

Terminal window
$ sudo systemctl edit docker.service

Add:

[Unit]
# In a drop-in, After= is additive; an empty assignment first clears the
# inherited list, then re-add only the ordering you actually need
After=
After=network.target

Step 3: Check the new boot time

Terminal window
$ systemctl reboot
# After reboot
$ systemd-analyze

Goal achieved.

Key Commands Reference

Terminal window
# Total boot time
$ systemd-analyze
# Time per service (sorted)
$ systemd-analyze blame
# Dependency chain (what blocked what)
$ systemd-analyze critical-chain
# Visual timeline
$ systemd-analyze plot > boot.svg
# Service logs
$ journalctl -u servicename -n 50
# List what depends on a service
$ systemctl list-dependencies servicename

Key Takeaway

Boot performance rarely matters until you’re rebooting frequently. But when you are, systemd-analyze shows you exactly where the time goes. Most slow boots have just 2-3 culprits. Find them, disable unnecessary ones, parallelize where possible, and you’re done.

Five minutes with systemd-analyze can save you years of frustration.

