SumGuy's Ramblings

Systemd Socket Activation: Start Services Only When Someone Actually Knocks

Your Server Is Running Services Nobody’s Using Right Now

Here’s a dirty secret about most home lab setups: half the services you’ve got running are just sitting there, consuming RAM and contributing nothing. The SSH daemon waiting patiently. That little custom API server you wrote at midnight. The print spooler you forgot exists.

Systemd socket activation is the answer to that waste. The idea is simple — instead of starting a service at boot and leaving it running forever, you start just the socket, and the actual service only wakes up when someone knocks. No traffic, no process. Traffic arrives, process spins up, handles it, optionally goes back to sleep.

This isn’t a new concept. Inetd has been doing it since the 1980s. Systemd just made it not terrible to configure.

What Socket Activation Actually Does

When systemd manages a socket, it creates and holds the listening socket file descriptor before the service starts. The kernel queues incoming connections. When a connection actually arrives, systemd hands that pre-opened socket to the service process as it starts up.

This means:

- No RAM or CPU spent on a service until the first connection arrives.
- Connections that arrive while the service is still starting are queued by the kernel, not dropped.
- You can restart or upgrade the service without losing the listening socket — systemd holds it the whole time.

The trade-off: that first connection pays the startup cost. For SSH, that’s fine. For a real-time trading platform, maybe think twice.

The Unit File Pair

Socket activation requires two unit files: a .socket unit and a matching .service unit. They’re linked by name — foo.socket activates foo.service.

The Socket Unit

# /etc/systemd/system/myapp.socket
[Unit]
Description=MyApp Socket

[Socket]
ListenStream=8080
Accept=no

[Install]
WantedBy=sockets.target

ListenStream is for TCP (or, given a path instead of a port, a Unix stream socket). You can also use:

- ListenDatagram= for UDP
- ListenSequentialPacket= for SOCK_SEQPACKET sockets (Unix only)
- ListenFIFO= for named pipes

Accept=no means systemd passes the single listening socket to your service and it handles accept() itself. This is what you want for most daemons.

Accept=yes is the inetd model — systemd accepts each connection and spawns a new service instance per connection. Each instance gets the connected socket, not the listening one. Useful for simple one-shot handlers.

The Service Unit

# /etc/systemd/system/myapp.service
[Unit]
Description=MyApp Service
Requires=myapp.socket

[Service]
ExecStart=/usr/local/bin/myapp
StandardInput=socket

[Install]
WantedBy=multi-user.target

StandardInput=socket connects the activated socket to stdin, inetd-style. With Accept=yes that’s the per-connection socket; it also works with Accept=no when the unit declares a single socket, in which case stdin is the listening socket itself. Without it, an Accept=no service finds the listening socket at file descriptor 3 (or uses the sd_listen_fds() helper from libsystemd).

The Requires=myapp.socket isn’t strictly necessary — systemd figures out the dependency by name — but it’s explicit and good practice.
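To make the fd-3 convention concrete, here’s a minimal Python sketch of the Accept=no pattern. The helper name and fallback port are my own for illustration; the LISTEN_FDS/LISTEN_PID environment variables and fd 3 start offset are the actual systemd protocol:

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # first fd systemd hands to an activated service


def get_listen_socket(fallback_port=8080):
    """Adopt the socket systemd passed us, or bind our own when run standalone."""
    listen_fds = int(os.environ.get("LISTEN_FDS", "0"))
    if os.environ.get("LISTEN_PID") == str(os.getpid()) and listen_fds >= 1:
        # Socket-activated: the listening socket is already open at fd 3.
        return socket.socket(fileno=SD_LISTEN_FDS_START)
    # Standalone: create, bind, and listen ourselves.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("127.0.0.1", fallback_port))
    sock.listen(16)
    return sock


def serve_forever(sock):
    """Accept=no means the service owns the accept() loop."""
    while True:
        conn, _ = sock.accept()
        with conn:
            conn.sendall(b"hello from a socket-activated service\n")
```

The fallback branch is what lets the same script run under `systemd-socket-activate`, under real systemd, or bare on your laptop.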

A Real Example: On-Demand Web Server

Say you’ve got a lightweight Flask app that only gets hit occasionally:

# /etc/systemd/system/flask-app.socket
[Unit]
Description=Flask App Socket

[Socket]
ListenStream=5000
SocketUser=www-data
SocketMode=0660

[Install]
WantedBy=sockets.target
# /etc/systemd/system/flask-app.service
[Unit]
Description=Flask App
After=network.target

[Service]
User=www-data
WorkingDirectory=/opt/flask-app
ExecStart=/opt/flask-app/venv/bin/gunicorn -w 2 -b fd://0 app:app
StandardInput=socket
Restart=on-failure

[Install]
WantedBy=multi-user.target

The fd://0 tells gunicorn to bind to the socket passed as file descriptor 0 — stdin, courtesy of StandardInput=socket. Gunicorn supports this directly; support elsewhere varies, so check your server’s docs for inherited-fd or systemd socket activation support.

Enable it with just the socket:

sudo systemctl enable --now flask-app.socket

Don’t enable the service directly. Let the socket do the activating.

SSH with Socket Activation

OpenSSH ships with socket activation support out of the box on most distros. Check if you already have it:

ls /lib/systemd/system/ssh.socket
systemctl status ssh.socket

On some systems (Ubuntu since 22.10, for instance), SSH has switched to socket activation by default, using Accept=no — sshd doesn’t start until the first connection, and a single daemon then persists. The config looks like:

# ssh.socket
[Socket]
ListenStream=22
Accept=no

Some distros have also shipped an Accept=yes variant, where each incoming SSH connection spawns a fresh sshd instance from a template unit and no main sshd persists between connections. Honestly a cleaner model than the old fork-from-daemon approach.

Custom Unix Socket Example

For a local daemon communicating via Unix socket — say, a custom monitoring agent:

# /etc/systemd/system/monitor-agent.socket
[Unit]
Description=Monitor Agent Unix Socket

[Socket]
ListenStream=/run/monitor-agent.sock
SocketUser=root
SocketGroup=monitoring
SocketMode=0660
RemoveOnStop=yes

[Install]
WantedBy=sockets.target
# /etc/systemd/system/monitor-agent.service
[Unit]
Description=Monitor Agent
Requires=monitor-agent.socket

[Service]
ExecStart=/usr/local/bin/monitor-agent --socket /run/monitor-agent.sock
Restart=on-failure

The socket file appears at /run/monitor-agent.sock the moment you enable the socket unit, even before the service starts. Clients can connect immediately — they’ll just queue while the service boots.
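A quick client sketch demonstrates that queuing behavior. The request/reply line protocol here is an assumption for illustration — substitute whatever your agent actually speaks:

```python
import socket


def query_agent(sock_path="/run/monitor-agent.sock", request=b"status\n"):
    """Connect to the agent's Unix socket, send one request, return the reply.

    If the agent isn't running yet, connect() still succeeds -- systemd
    holds the listening socket and queues us while the service starts.
    """
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(request)
        return s.recv(4096)
```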

Testing Without Deploying: systemd-socket-activate

Before you wire this into production, use systemd-socket-activate to test your service locally:

systemd-socket-activate -l 8080 /usr/local/bin/myapp

This creates a socket on port 8080 and passes it to your binary when a connection arrives, exactly like systemd would. No unit files needed. Essential for debugging whether your app actually speaks the socket activation protocol correctly.

# Test with a quick curl in another terminal
curl http://localhost:8080/health

You’ll see the service start and handle the request. Under a real systemd deployment, the socket unit keeps listening even after the service exits, ready for the next connection.

Checking What’s Happening

# See all socket units
systemctl list-units --type=socket

# Check if a socket triggered its service
systemctl status myapp.socket
systemctl status myapp.service

# Watch activation events in real time
journalctl -f -u myapp.socket -u myapp.service

The status output for the socket will show Triggers: myapp.service and how many connections have been accepted.

Accept=yes vs Accept=no: When to Use Each

Accept=no (default, recommended for most cases):

- One long-running process handles all connections and calls accept() itself
- Far lower per-connection overhead
- The service can keep state (caches, connection pools) across connections

Accept=yes:

- One fresh process per connection, inetd-style
- Dead-simple handlers: read stdin, write stdout, exit
- Process-per-connection overhead — fine at low traffic, painful at scale

For Accept=yes, your service gets the socket as stdin/stdout and can just read/write like it’s talking to a pipe:

#!/usr/bin/env python3
import sys
data = sys.stdin.read()
sys.stdout.write(f"You sent: {data}")

That’s a complete network service. One connection per process, fully automatic.

Service Auto-Stop After Idle

The real power move: combine socket activation with TimeoutStopSec and a short idle timeout in your service to actually free up RAM when nobody’s using it:

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure
TimeoutStopSec=30
# Your app should exit after idle timeout, then systemd restarts on next connection

Your app needs to implement the idle timeout itself (exit after N seconds without a request), but the socket keeps listening. Next connection? Systemd wakes the service back up. Your RAM is free in the meantime.
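Here’s a sketch of that idle-exit logic in Python — the timeout value, function name, and one-line reply are placeholders. Set a timeout on accept() and exit cleanly when it fires; with Restart=on-failure, a clean exit isn’t respawned, and the next connection to the socket unit re-activates the service:

```python
import socket
import sys


def serve_until_idle(listen_sock, idle_timeout=60):
    """Handle connections until idle_timeout seconds pass with none, then exit 0.

    systemd's socket unit keeps listening; the next connection starts
    a fresh instance of the service.
    """
    listen_sock.settimeout(idle_timeout)
    while True:
        try:
            conn, _ = listen_sock.accept()
        except socket.timeout:
            sys.exit(0)  # clean exit: Restart=on-failure won't respawn us
        with conn:
            conn.sendall(b"ok\n")
```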

When Socket Activation Doesn’t Make Sense

Not everything benefits from this. Skip it for:

- Latency-sensitive services where the first-connection startup cost hurts
- Anything that’s busy around the clock — there’s no idle time to reclaim
- Services that do background work (scheduled jobs, queue consumers) even with no connections
- Databases and anything else with a long, expensive startup

For lightweight HTTP services, custom daemons, development servers, and anything that gets hit occasionally? Socket activation is a straightforward win. Your RAM will thank you, and honestly, it’s just a cleaner architecture — services that exist only when they’re needed.

Your 2 AM self will appreciate having 400MB of RAM free when debugging something unrelated.

