SumGuy's Ramblings

Uptime Kuma: Status Pages, Alerts, and Knowing Before Your Users Do

Finding Out Your Service Is Down From a User Is a Special Kind of Shame

There’s a specific feeling you get when someone messages you “hey, is your thing broken?” and you didn’t know. You check. It’s been down for 45 minutes. Your monitoring was set up, technically, but the alert went to an email address you check twice a week.

Uptime Kuma fixes the notification problem. It’s a self-hosted monitoring tool with a genuinely nice UI, a large number of notification integrations, and enough monitor types to cover most homelab and small production scenarios. The basic setup (HTTP monitor, email alert) takes ten minutes. The advanced setup (push monitors, Docker health, status pages, maintenance windows) takes an afternoon and then you never have to think about it again.


Docker Compose Setup That Handles Restarts Properly

services:
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      - uptime-kuma-data:/app/data
    environment:
      - UPTIME_KUMA_PORT=3001

volumes:
  uptime-kuma-data:

That’s it. Hit http://your-server:3001, create an admin account, and you’re monitoring.

One important note on volumes: use a named volume (uptime-kuma-data) rather than a bind mount. Uptime Kuma writes an SQLite database to /app/data, and bind mounts can cause permission issues depending on your Docker setup. Named volumes are cleaner.

If you’re putting this behind a reverse proxy (you should), make sure to configure WebSocket support — Uptime Kuma uses WebSockets for live dashboard updates. In Caddy:

uptime.yourdomain.com {
    reverse_proxy uptime-kuma:3001
}

Caddy handles WebSocket proxying automatically. Nginx needs explicit configuration:

location / {
    proxy_pass http://uptime-kuma:3001;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}

Monitor Types Beyond Basic HTTP

Most people set up HTTP monitors and stop there. Here’s what you’re missing.

TCP Monitor

Checks that a port is open and accepting connections. Useful for databases, SSH, mail servers, and anything else that doesn’t speak HTTP. A Postgres check looks like this:

Type: TCP Port
Hostname: postgres.internal
Port: 5432

TCP monitoring tells you the service is listening. It doesn’t tell you if it’s actually working — for that you need application-level health checks. But “port is closed” is a very useful signal.
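When you’re debugging, the same check is easy to reproduce by hand with bash’s built-in /dev/tcp. A minimal sketch (the function name and the host/port in the example are placeholders, not anything Uptime Kuma ships):

```shell
# port_open HOST PORT -> exit 0 if the port accepts a TCP connection within 3s.
# Rough equivalent of what a TCP monitor does on every check interval.
port_open() {
    timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Example: report status for a placeholder host/port.
if port_open 127.0.0.1 5432; then
    echo "up"
else
    echo "down"
fi
```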

DNS Monitor

Checks DNS resolution and optionally validates the resolved address. Useful for catching DNS provider outages, expired domains, and records that change unexpectedly.

Docker Container Monitor

If Uptime Kuma has access to your Docker socket, it can monitor container health status directly. This reports the Docker health check result — if your container has a HEALTHCHECK defined in its Dockerfile, Uptime Kuma shows whether it’s passing.

# Give Uptime Kuma Docker socket access
services:
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    volumes:
      - uptime-kuma-data:/app/data
      - /var/run/docker.sock:/var/run/docker.sock:ro

Add :ro (read-only) — Uptime Kuma only needs to read container status, not control Docker.

Keyword Monitor

HTTP monitor with a twist: checks that the response body contains (or doesn’t contain) a specific string. Use this to verify your app is actually serving real content, not a cached error page or a CDN’s “maintenance” placeholder.

Type: HTTP(s) - Keyword
URL: https://app.yourdomain.com
Keyword: Dashboard
Expected: Keyword exists

If your app is returning a 200 with a “Site is under maintenance” splash page, a basic HTTP monitor sees 200 and reports up. A keyword monitor catches this.
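To see why this matters, here’s a tiny sketch of the keyword logic applied to a response body. The function name is illustrative, not Uptime Kuma’s code; in a real probe the body would come from a plain `curl -fsS "$URL"`:

```shell
# check_keyword BODY KEYWORD -> "up" if the keyword appears in the body, else "down".
check_keyword() {
    if grep -q "$2" <<<"$1"; then
        echo "up"
    else
        echo "down"
    fi
}

# A 200 response with a maintenance splash page still fails the keyword check:
check_keyword "<h1>Site is under maintenance</h1>" "Dashboard"   # prints "down"
```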

Certificate Expiry Monitor

Checks TLS certificate validity and warns before expiry. This is the “I forgot to renew the cert” insurance policy. Set warning thresholds well ahead of the expiry date so you have time to act.

For services managed by Let’s Encrypt with auto-renewal (Caddy, Certbot), this is a belt-and-suspenders check. For internal services with manually managed certs, it’s essential.
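For a quick manual equivalent, you can compute days-to-expiry from a certificate file with openssl. A sketch (the function name is illustrative; assumes openssl and GNU date are available):

```shell
# days_until_expiry CERT_FILE -> print whole days until the certificate expires.
days_until_expiry() {
    local end epoch_end epoch_now
    end=$(openssl x509 -noout -enddate -in "$1" | cut -d= -f2)
    epoch_end=$(date -d "$end" +%s)
    epoch_now=$(date +%s)
    echo $(( (epoch_end - epoch_now) / 86400 ))
}
```

For a live endpoint, pipe `openssl s_client -connect host:443 -servername host` into `openssl x509 -noout -enddate` instead of reading a file.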


Push Monitors: The Cron Job Killer Feature

This is the feature most people don’t know about, and it’s genuinely excellent.

Instead of Uptime Kuma pulling to check your service, push monitors work the other way: your service pushes to Uptime Kuma. You create a push monitor, get a unique URL, and call that URL from your cron job, backup script, or scheduled task. If Kuma doesn’t receive a push within the expected interval, it alerts.

This solves the “my nightly backup job silently stopped running” problem. Your backup script ends with:

#!/bin/bash
# ... backup operations ...

# If we got here, backup succeeded — notify Uptime Kuma
curl -s "https://uptime.yourdomain.com/api/push/your-unique-push-id?status=up&msg=Backup+completed&ping="

If the script fails mid-way, exits early, or doesn’t run at all, Kuma never gets the push and marks it down after the heartbeat interval.
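One way to structure this is a small wrapper that only fires the push when the wrapped command exits cleanly. A sketch; the function name and the push URL in the usage comment are placeholders:

```shell
# run_and_report PUSH_URL COMMAND [ARGS...] -> run the command; ping the push
# endpoint only on success. On failure, Kuma gets no push and alerts after the
# heartbeat interval.
run_and_report() {
    local push_url="$1"; shift
    if "$@"; then
        curl -fsS "$push_url?status=up&msg=OK&ping=" >/dev/null
    else
        return 1
    fi
}

# Example (URL is a placeholder):
# run_and_report "https://uptime.yourdomain.com/api/push/your-unique-push-id" /usr/local/bin/backup.sh
```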

Use push monitors for backup jobs, certificate renewal scripts, data syncs, and any other scheduled task that can fail silently.


Notification Channels: Actually Getting Alerted

The UI for notification setup is self-explanatory. Here’s what each channel is best for:

Telegram

Create a bot via @BotFather, get the token, find your chat ID:

# Get your chat ID after sending a message to your bot
curl https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates

Telegram notifications are instant, work on every platform, and the Uptime Kuma integration sends a clean message with service name, status, and response time. For personal homelab monitoring, Telegram is hard to beat.
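The getUpdates response is JSON, with the chat ID at .result[].message.chat.id. A sketch with jq (assumes jq is installed; the payload below is an illustrative sample, not real API output):

```shell
# Extract the chat ID from a getUpdates response. The real payload comes from
# the curl call above; this sample mirrors its structure.
response='{"ok":true,"result":[{"message":{"chat":{"id":123456789}}}]}'
echo "$response" | jq '.result[0].message.chat.id'   # prints 123456789
```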

Discord

Create a webhook in your server: Server Settings → Integrations → Webhooks. Paste the webhook URL into Uptime Kuma. Discord notifications look nice with the embed format Uptime Kuma sends.

ntfy

ntfy is a self-hosted push notification service. If you want the whole monitoring stack self-hosted:

  1. Run ntfy: docker run -p 80:80 binwiederhier/ntfy serve
  2. Install ntfy on your phone
  3. Subscribe to a topic
  4. Configure Uptime Kuma to push to your ntfy server

Zero third-party services in the alert chain.

Slack and Teams

Good for team monitoring situations. Create an incoming webhook in your workspace, paste the URL. Uptime Kuma sends properly formatted messages.

Email via SMTP

Works fine, but consider the notification latency of whatever email provider you’re using. For urgent downtime alerts, Telegram/Discord/ntfy are more immediate.


The Status Page: Your Own statuspage.io

Uptime Kuma can generate a public status page — a clean, auto-updating page showing the current status of your services. This is what companies pay statuspage.io for.

Setting it up:

  1. Settings → Status Page → Create new status page
  2. Give it a slug (/status/myapp or map a custom domain)
  3. Add the monitors you want to display (you choose which ones are public)
  4. Set incident response text if needed

The status page shows the current up/down state of each monitor, recent uptime percentages, and any active incident or maintenance notices.

You can display this at status.yourdomain.com (requires DNS setup and reverse proxy config to route to the status page endpoint).

For services with actual users — even just friends and family using your self-hosted stuff — having a public status page is surprisingly useful. “Is it down for everyone or just me?” gets answered without them having to message you.


Maintenance Windows

Scheduled maintenance means no false alerts and a maintenance notice on your status page.

In Uptime Kuma: Settings → Maintenance → Add Maintenance

Configure the maintenance title, the affected monitors, and the schedule (a one-off window or a recurring slot).

During maintenance windows, the affected monitors show as “maintenance” rather than “down” on the status page. You don’t get alerted. Your status page shows a maintenance banner instead of a red service status. Users see planned maintenance rather than unplanned downtime.


The API: Programmatic Integration

Uptime Kuma doesn’t have an official REST API. The web UI talks to the backend over socket.io, and the stable HTTP surface is limited to things like push endpoints, status pages, and the Prometheus /metrics exporter. For programmatic monitor management, you need a client that speaks the socket.io protocol.

For more robust integration, uptime-kuma-api is a Python library that wraps the socket.io API:

from uptime_kuma_api import UptimeKumaApi, MonitorType

api = UptimeKumaApi("http://localhost:3001")
api.login("admin", "password")

# Add a monitor (the library expects a MonitorType enum, not a bare string)
api.add_monitor(
    type=MonitorType.HTTP,
    name="My New App",
    url="https://app.yourdomain.com",
    interval=60
)

api.disconnect()

Useful in a deployment script: create a monitor when you spin up a new service, set a maintenance window during deploy, enable the monitor when deploy completes.


Database Backup: Don’t Learn This the Hard Way

Uptime Kuma stores everything in SQLite at /app/data/kuma.db. Back this up.

The simplest backup — add to your backup script:

# Stop Uptime Kuma briefly for a clean SQLite backup.
# Note: Docker Compose prefixes volume names with the project name;
# check `docker volume ls` for the exact name on your system.
docker stop uptime-kuma
cp /var/lib/docker/volumes/uptime-kuma-data/_data/kuma.db \
   /backups/kuma-$(date +%Y%m%d).db
docker start uptime-kuma

Or use SQLite’s built-in backup without stopping:

sqlite3 /path/to/kuma.db ".backup /backups/kuma-$(date +%Y%m%d).db"

SQLite’s backup command handles a live database safely. No stop/start required.
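Either way, rotate old backups so the directory doesn’t grow without bound. A sketch (the function name is illustrative; assumes backups are named kuma-YYYYMMDD.db):

```shell
# prune_backups DIR KEEP -> delete all but the newest KEEP kuma-*.db files in DIR.
# Relies on file modification times for ordering; avoid spaces in backup paths.
prune_backups() {
    local dir="$1" keep="$2"
    ls -1t "$dir"/kuma-*.db 2>/dev/null | tail -n +"$((keep + 1))" | xargs -r rm --
}

# Example: keep the last 14 daily backups
# prune_backups /backups 14
```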

What you lose without a backup: all your monitors, notification configurations, alert history, and status page setup. Setting it up again takes a couple of hours. Backup takes 30 seconds. You know which one to choose.


Summary: A Monitoring Setup That Actually Works

A complete Uptime Kuma setup for a real homelab: HTTP and keyword monitors for web services, TCP monitors for databases and other non-HTTP services, push monitors for backups and cron jobs, Telegram or ntfy for instant alerts, a public status page for your users, maintenance windows for planned work, and a scheduled SQLite backup.

The whole thing takes a few hours to configure properly. After that, you find out about problems in minutes, not from user messages. That’s the upgrade.

