Your server takes forever to boot. You restart it once a month, and it sits for 5 minutes doing… something. You have no idea what’s slow, so you just wait and move on. But what if you could see exactly which services are dragging things down?
systemd-analyze is a built-in tool that shows you boot performance. It’s criminally underused. Run it once, find the slow services, and shave minutes off your boot time.
The Quick Overview
```shell
$ systemd-analyze
Startup finished in 2.543s (kernel) + 8.234s (userspace) = 10.777s
graphical.target reached after 8.234s in userspace
```

Your system took 2.5 seconds for the kernel to boot, then 8.2 seconds of userspace services. Total 10.8 seconds. Is that good? On a VM with nothing running, it should be sub-5 seconds. On a loaded server, 10-20 seconds is normal.
Find the Actual Bottlenecks
```shell
$ systemd-analyze blame
3.245s systemd-udev-settle.service
2.102s postgres@13-main.service
1.876s mysql.service
1.234s docker.service
0.987s systemd-journal-flush.service
0.654s ssh.service
0.345s networking.service
0.123s systemd-sysctl.service
```

Sorted by time taken. This is the real offender list. systemd-udev-settle is taking 3.2 seconds. That's huge.
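On a busy server the blame list can run to hundreds of units. A quick filter pulls out only the ones worth investigating. This is a sketch: the here-doc stands in for real `systemd-analyze blame` output, and it assumes durations are printed in plain seconds (real output can also use `ms` or `min` units):

```shell
# Sample `systemd-analyze blame` output. On a real system, pipe the
# command itself instead: systemd-analyze blame | awk '...'
cat <<'EOF' > blame.txt
3.245s systemd-udev-settle.service
2.102s postgres@13-main.service
1.876s mysql.service
1.234s docker.service
0.987s systemd-journal-flush.service
0.654s ssh.service
EOF

# Keep only services that took more than 1 second:
# $1 is the duration ("3.245s"); strip the trailing "s" and compare numerically.
awk '{ t = $1; sub(/s$/, "", t); if (t + 0 > 1) print $0 }' blame.txt
```

With the sample data above, this prints the four services slower than one second and drops the rest.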
But wait — this doesn’t tell you why things are slow or what they’re waiting on. Use critical-chain:
```shell
$ systemd-analyze critical-chain
graphical.target @8.234s
└─multi-user.target @8.234s
  └─docker.service @6.234s +1.234s
    └─basic.target @3.456s
      └─postgresql.service @2.234s +2.102s
        └─system-getty.slice @0.234s
          └─...
```

This shows the dependency chain. graphical.target depends on multi-user.target, which depends on docker.service, which took 1.2 seconds and waited for its own dependencies. postgresql.service took 2.1 seconds (the + value shows how long the unit itself took to start; the @ value shows when it became active, relative to boot).
The critical path is: kernel → getty → postgresql (2.1s) → docker (1.2s) → multi-user (0s, just a synchronization point).
That’s what you need to optimize: postgresql and docker in this example.
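You can sanity-check that reading by adding up the + times along the chain. A small sketch (the here-doc stands in for real `critical-chain` output):

```shell
# Sample `systemd-analyze critical-chain` output.
cat <<'EOF' > chain.txt
graphical.target @8.234s
└─multi-user.target @8.234s
  └─docker.service @6.234s +1.234s
    └─basic.target @3.456s
      └─postgresql.service @2.234s +2.102s
        └─system-getty.slice @0.234s
EOF

# Sum every "+N.NNNs" field -- the time each unit itself spent starting.
awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^\+/) { t = $i; gsub(/[+s]/, "", t); sum += t } } END { printf "units on the critical path spent %.3fs starting\n", sum }' chain.txt
# → units on the critical path spent 3.336s starting
```

Here postgresql (2.102s) plus docker (1.234s) account for 3.3 of the 8.2 userspace seconds; the rest is dependency waiting and parallel work.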
Visualize It
```shell
$ systemd-analyze plot > boot.svg
```

This writes an SVG showing the boot timeline graphically. Services are color-coded and arranged by start time, so you can spot parallelization failures (services starting sequentially when they could start in parallel).
Deep Dive: One Service’s Startup
Want to know exactly what a service is doing during startup?
```shell
$ journalctl -u postgres@13-main.service
-- Logs begin at Tue 2025-05-06 10:23:45 UTC, end at Tue 2025-05-06 10:23:48 UTC. --
May 06 10:23:45 server postgres[1234]: [initdb] initializing...
May 06 10:23:45 server postgres[1234]: [initdb] done (15.234s)
May 06 10:23:48 server postgres[1234]: ready for connections
```

Postgres took 2.1 seconds total. Initdb took 15 seconds, but that only happens on first run. After that, it should start in under 1 second.
Common Bottlenecks and Fixes
1. systemd-udev-settle
This waits for all device discovery to finish. On systems with many devices or slow storage, it can take seconds.
Check what it’s waiting on:
```shell
$ udevadm info --query=all --name=/dev/sda
```

Fix: If you don't need to wait on all devices, disable the settle service:
```shell
$ sudo systemctl mask systemd-udev-settle.service
```

Or set a timeout in /etc/systemd/system.conf:
```ini
[Manager]
DefaultTimeoutStartSec=10s
```

2. Database Services (postgres, mysql)
These often have recovery or initialization steps on startup.
Check the logs:
```shell
$ journalctl -u postgres@13-main.service -n 50
```

Look for long operations. Examples:
- WAL recovery (reapplying transaction logs after a crash)
- Checkpoint operations
- Index repair
Fix: These are usually one-time. After a clean shutdown, the next boot is fast. If slow every boot:
```shell
# Check if the database crashed
$ systemctl status postgres
# If it shows "failed", there's probably recovery happening
```
```shell
# Force a clean shutdown next time
$ sudo systemctl stop postgres
$ sudo systemctl start postgres
```

3. Network Services
Services like networking.service or dhclient can stall the boot while they wait for the network to come up.
```shell
$ systemd-analyze critical-chain | grep networking
networking.service @2.345s +1.234s
```

Fix: Set timeouts so they don't block boot:
```ini
[Service]
TimeoutStartSec=5s
```

Or soften the dependency if the network isn't critical: with Wants= (instead of Requires=), the unit still orders itself after the network target but won't fail the boot if the network never comes up:

```ini
[Unit]
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/myapp
```

4. Docker
Docker initializes cgroups, loads images, and starts containers. On cold boot, this is slow.
```shell
$ journalctl -u docker.service
May 06 10:23:45 server dockerd[1234]: Loading containers... done (3.456s)
May 06 10:23:46 server dockerd[1234]: Loading images... done (0.234s)
May 06 10:23:47 server dockerd[1234]: Starting containers... done (0.567s)
```

Fix: Reduce Docker's startup work. In /etc/docker/daemon.json, make sure you're on a fast storage driver and skip the per-container userland proxy:

```json
{
  "storage-driver": "overlay2",
  "userland-proxy": false
}
```

(Older guides also suggest "disable-legacy-registry": true, but that option was removed in modern Docker, and an unrecognized key will stop the daemon from starting at all.)

Or delay the docker service:
```shell
$ sudo systemctl disable docker.service
# Then start it manually or with a timer
```

Practical Example
Your server boots in 20 seconds, and you want it under 10.
```shell
$ systemd-analyze blame
8.234s postgres@13-main.service
6.123s docker.service
3.456s nginx.service
```

Postgres takes 8 seconds. Docker takes 6. These are dependencies for your app.
Step 1: Check if postgres is even needed at boot
```shell
$ systemctl list-dependencies graphical.target | grep postgres
```

If it's not directly required, remove it from the boot chain:
```shell
$ sudo systemctl disable postgres@13-main.service
$ sudo systemctl start postgres   # Start it manually when needed
```

Step 2: Make docker start in parallel
```shell
$ sudo systemctl edit docker.service
```

Add:
```ini
[Unit]
After=network.target
# Remove any dependencies that serialize startup
```

Step 3: Check the new boot time
```shell
$ systemctl reboot
# After reboot
$ systemd-analyze
```

Goal achieved.
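For reference, the override you created in Step 2 with `systemctl edit` ends up as a drop-in file, typically at a path like /etc/systemd/system/docker.service.d/override.conf (the exact filename may vary by distro):

```ini
# /etc/systemd/system/docker.service.d/override.conf
# Drop-in created by `systemctl edit docker.service`; merged over the
# packaged unit file without modifying it.
[Unit]
After=network.target
```

If you ever write a drop-in by hand instead, run `sudo systemctl daemon-reload` afterwards; `systemctl edit` does that for you automatically.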
Key Commands Reference
```shell
# Total boot time
$ systemd-analyze

# Time per service (sorted)
$ systemd-analyze blame

# Dependency chain (what blocked what)
$ systemd-analyze critical-chain

# Visual timeline
$ systemd-analyze plot > boot.svg

# Service logs
$ journalctl -u servicename -n 50

# What a service depends on (add --reverse for what depends on it)
$ systemctl list-dependencies servicename
```

Key Takeaway
Boot performance rarely matters until you’re rebooting frequently. But when you are, systemd-analyze shows you exactly where the time goes. Most slow boots have just 2-3 culprits. Find them, disable unnecessary ones, parallelize where possible, and you’re done.
Five minutes with systemd-analyze can save you years of frustration.