SumGuy's Ramblings

DDoS Mitigation: Teaching Your Server to Say No Politely (Then Impolitely)

Nobody’s Coming For You Specifically. They Don’t Need To.

Your self-hosted Gitea or Nextcloud isn’t on some APT group’s target list. But it absolutely is on every automated scanner’s rotation. Shodan indexes your IP within hours of it going live. Bots don’t need a reason — they probe everything on port 80 and 443, trying default creds, common CVEs, and login forms with credential dumps.

A volumetric DDoS from a botnet? Probably not your problem — that takes coordination and your home lab isn’t worth the effort. But application-layer hammering from a few hundred bots trying to brute-force your WordPress login? That will absolutely bring down a $5 VPS if you’re not ready for it.

Here’s the practical threat model for self-hosters, and how to handle each layer.

Understanding What’s Actually Coming at You

Volumetric Attacks

Raw bandwidth floods — UDP, ICMP, DNS amplification. Designed to saturate your pipe. If you’re on a home connection or a small VPS, you literally cannot defend against this at the server level. Your ISP sees the traffic before you do. This is where a CDN or upstream scrubbing service is the only real answer.

Protocol Attacks

SYN floods, Ping of Death, malformed packets designed to exhaust connection tables. Your firewall handles these, mostly. Modern kernels have SYN cookie protection enabled by default. Still worth configuring.
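A quick check that SYN cookies are actually on, assuming a Linux host (the sysctl.d filename below is an arbitrary choice):

```shell
# 1 = SYN cookies enabled (the default on mainstream distros)
cat /proc/sys/net/ipv4/tcp_syncookies

# If it prints 0, enable them persistently:
#   echo 'net.ipv4.tcp_syncookies = 1' > /etc/sysctl.d/90-syncookies.conf
#   sysctl --system
```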

Application Layer Attacks (L7)

HTTP floods, credential stuffing, scraping, slow-read attacks. This is where 99% of self-hosters actually get hurt. These look like legitimate requests, just… many of them. Rate limiting, Fail2ban, and bot challenges live here.

Layer 1: Nginx Rate Limiting

The limit_req_zone directive is stupidly powerful and criminally underused.

# In nginx.conf http block — define zones
http {
    # Rate limit by IP: 10 requests per second, 10MB zone storage
    limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;

    # Stricter limit for login endpoints
    limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;

    # Rate limit by IP+URL combo (useful for API endpoints)
    limit_req_zone $binary_remote_addr$request_uri zone=api:10m rate=5r/s;
}
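How big should a zone be? Per the nginx docs, each tracked key costs on the order of 64 to 128 bytes of state depending on platform, and when the zone fills, nginx evicts the least recently used entry. A back-of-envelope check (illustrative numbers):

```shell
# Worst case: 128 bytes of state per tracked client IP
ZONE_BYTES=$((10 * 1024 * 1024))   # the 10m zone above
STATE_BYTES=128
echo "~$((ZONE_BYTES / STATE_BYTES)) tracked IPs in a 10 MB zone"
```

In other words, 10m is far more than a small site needs, but zone memory is cheap enough that there is no reason to shrink it.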

Apply in your server block:

server {
    listen 443 ssl;
    server_name yourdomain.com;

    # General rate limit — burst 20 requests, then queue
    location / {
        limit_req zone=general burst=20 nodelay;
        limit_req_status 429;
        # ... proxy_pass or root
    }

    # Throttle login attempts with no mercy
    location /wp-login.php {
        limit_req zone=login burst=3 nodelay;
        limit_req_status 429;
        # ... proxy_pass
    }

    location /api/ {
        limit_req zone=api burst=10 nodelay;
        limit_req_status 429;
        # ... proxy_pass
    }
}

The burst parameter lets legitimate users send a quick cluster of requests without hitting 429. The nodelay flag means burst requests get served immediately rather than queued. Without nodelay, Nginx slows responses down to the zone rate — which looks like your site being slow rather than rate-limited.

Add a custom 429 page so users know what happened:

error_page 429 /429.html;
location = /429.html {
    root /var/www/html;
    internal;
}

Layer 2: Fail2ban for HTTP Brute Force

Nginx rate limiting answers excess requests with a 429, but your server still has to accept every connection and parse every request. Fail2ban watches your logs and bans repeat offenders at the firewall level, so they never get that far.

apt install fail2ban

# Create /etc/fail2ban/jail.local
[DEFAULT]
bantime  = 3600
findtime = 600
maxretry = 5
# auto = tail log files; backend = systemd would ignore logpath and read the journal
backend  = auto
# Never ban yourself
ignoreip = 127.0.0.1/8 ::1

[nginx-http-auth]
enabled = true
port    = http,https
logpath = /var/log/nginx/error.log

[nginx-limit-req]
enabled  = true
port     = http,https
logpath  = /var/log/nginx/error.log
maxretry = 10

[nginx-botsearch]
enabled  = true
port     = http,https
logpath  = /var/log/nginx/access.log
maxretry = 2

systemctl enable fail2ban
systemctl start fail2ban

# Check what's banned
fail2ban-client status nginx-limit-req

# Manually ban an IP
fail2ban-client set nginx-limit-req banip 1.2.3.4

# Unban
fail2ban-client set nginx-limit-req unbanip 1.2.3.4

Fail2ban inserts firewall rules (iptables by default; nftables if your distro's banaction says so). The banned IP can’t even complete a TCP handshake with your server. Satisfying.
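If you want to see what the nginx-limit-req jail sees, you can mine the error log yourself. A sketch against a fabricated sample (the lines mimic nginx's limit_req error format; the IPs are documentation addresses):

```shell
# Fabricated sample of nginx limit_req error-log lines
cat > /tmp/error.log.sample <<'EOF'
2024/01/01 12:00:01 [error] 100#100: *1 limiting requests, excess: 20.1 by zone "general", client: 203.0.113.5, server: yourdomain.com, request: "GET / HTTP/1.1"
2024/01/01 12:00:02 [error] 100#100: *2 limiting requests, excess: 21.0 by zone "general", client: 203.0.113.5, server: yourdomain.com, request: "GET / HTTP/1.1"
2024/01/01 12:00:03 [error] 100#100: *3 limiting requests, excess: 3.2 by zone "login", client: 198.51.100.7, server: yourdomain.com, request: "POST /wp-login.php HTTP/1.1"
EOF

# Count throttled requests per client IP, worst offenders first
grep 'limiting requests' /tmp/error.log.sample \
  | grep -oE 'client: [0-9a-fA-F.:]+' \
  | awk '{print $2}' \
  | sort | uniq -c | sort -rn
```

Point the same pipeline at your real /var/log/nginx/error.log to decide whether a jail's maxretry is too loose or too tight.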

Layer 3: UFW Rate Limiting

If you’re not using Fail2ban, UFW has basic rate limiting built in:

# Limit SSH connections (built-in)
ufw limit ssh

# Limit HTTP/HTTPS (6 connections per 30 seconds per IP)
ufw limit 80/tcp
ufw limit 443/tcp

This is blunt — it limits TCP connections, not application-level requests. Good as a floor, not a ceiling.

Layer 4: iptables Connection Limiting

For more surgical connection limiting:

# Limit new connections to 20 per minute per IP
iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate NEW \
  -m recent --set --name HTTP

iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate NEW \
  -m recent --update --seconds 60 --hitcount 20 --name HTTP \
  -j DROP

# Limit concurrent connections per IP to 50
iptables -A INPUT -p tcp --dport 80 \
  -m connlimit --connlimit-above 50 \
  -j REJECT --reject-with tcp-reset

# SYN flood damping. Note: this limit is global, not per-IP,
# and 1/s will throttle legitimate traffic too. Rely on SYN
# cookies first and tune these numbers for your workload.
iptables -A INPUT -p tcp --syn \
  -m limit --limit 1/s --limit-burst 3 \
  -j ACCEPT

iptables -A INPUT -p tcp --syn -j DROP

Save your rules:

apt install iptables-persistent
netfilter-persistent save

Layer 5: Traefik Rate Limit Middleware

If you’re running Traefik as your reverse proxy:

# traefik/dynamic/middlewares.yml
http:
  middlewares:
    rate-limit:
      rateLimit:
        average: 10
        burst: 20
        period: 1s
        sourceCriterion:
          ipStrategy:
            depth: 1  # assumes one trusted proxy in front of Traefik; omit ipStrategy if Traefik is your edge

    strict-rate-limit:
      rateLimit:
        average: 1
        burst: 3
        period: 1s

Apply to a router in your service definition:

labels:
  - "traefik.http.routers.myapp.middlewares=rate-limit@file"

Layer 6: CrowdSec — Collective Intelligence

Fail2ban is reactive: it bans IPs after they’ve already hit you. CrowdSec is collaborative: it shares ban lists with a community of thousands of servers. When an IP hammers someone else’s server, your server finds out and blocks it proactively.

# Install
curl -s https://packagecloud.io/install/repositories/crowdsec/crowdsec/script.deb.sh | bash
apt install crowdsec

# Install the Nginx collection
cscli collections install crowdsecurity/nginx

# Install the iptables bouncer (this actually does the blocking)
apt install crowdsec-firewall-bouncer-iptables

# Check what it's catching
cscli alerts list
cscli decisions list

# CrowdSec dashboard (optional)
cscli dashboard setup

CrowdSec parses your Nginx logs, detects attack patterns, and pushes decisions to the firewall bouncer. The community feed means you’re blocking known bad actors before they even knock on your door.
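The same self-ban caveat applies here as with Fail2ban. CrowdSec handles it with a whitelist parser. A sketch, following the structure in CrowdSec's whitelist docs (the filename and addresses are placeholders):

```yaml
# /etc/crowdsec/parsers/s02-enrich/mywhitelists.yaml
name: crowdsecurity/whitelists
description: "Never ban my own addresses"
whitelist:
  reason: "my home and VPN IPs"
  ip:
    - "203.0.113.10"      # placeholder: your static IP
  cidr:
    - "192.168.1.0/24"    # local network
```

Restart the crowdsec service to load it.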

The free tier covers everything you need for a self-hosted setup. The paid tier adds the management console and more intelligence feeds.

Layer 7: Cloudflare Free Tier — The Nuclear Option

If you’re getting hit hard enough that none of the above is keeping up, put Cloudflare in front of everything. The free tier includes unmetered DDoS mitigation, CDN caching, Bot Fight Mode, and a basic set of managed WAF rules.

Point your domain’s NS records to Cloudflare, proxy your A records (orange cloud), and most attack traffic never touches your origin server.

# In Cloudflare: Security > Settings
# Bot Fight Mode: On
# Browser Integrity Check: On
# Security Level: Medium

# Create a rate limiting rule:
# If: ip.src is not in {your_home_ip} AND http.request.uri.path contains "/wp-login"
# Then: Block after 5 requests per minute
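One interaction to watch: behind Cloudflare, your origin logs record Cloudflare's edge IPs, so the Fail2ban and CrowdSec layers above would end up banning Cloudflare itself. nginx's realip module restores the visitor's address (a sketch; fill in every range Cloudflare publishes at cloudflare.com/ips):

```nginx
# In the http block: trust Cloudflare's edges, then take the
# client address from the header Cloudflare sets
set_real_ip_from 173.245.48.0/20;   # one of Cloudflare's published ranges
# ...one set_real_ip_from line per published range...
real_ip_header CF-Connecting-IP;
```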

The tradeoff: Cloudflare sees all your traffic. For a public blog, that’s fine. For something sensitive, you’ve traded one risk for another. Know your threat model.

When to Just Give Up and Use a CDN

Here’s the honest answer: if you’re on a home connection and someone points a real botnet at you, you’re done. Your ISP will null-route your IP before you can configure anything.

For a $5 VPS, the math is similar — you’re renting 1Gbps shared bandwidth. A modest amplification attack saturates that instantly.
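The arithmetic is grim. US-CERT alert TA14-017A lists amplification factors of roughly 28 to 54x for DNS reflection and about 556x for NTP monlist, so a small attacker budget fills a 1 Gbps port (illustrative numbers):

```shell
ATTACKER_MBPS=20   # a cheap booter's worth of spoofed upstream
DNS_AMP=50         # mid-range DNS amplification factor
echo "$((ATTACKER_MBPS * DNS_AMP)) Mbps arriving at your 1000 Mbps port"
```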

The practical layers for self-hosters, in order of “most people need this”:

  1. Nginx rate limiting — always, no excuses, takes 10 minutes
  2. Fail2ban — always, takes 15 minutes
  3. Cloudflare proxy — if you’re public-facing and care about uptime
  4. CrowdSec — great addition, minimal overhead
  5. iptables connlimit — for extra paranoia
  6. Upstream DDoS scrubbing — only if you’re spending money and running something critical

Most attacks that actually bring down self-hosted setups are application-layer stuff — bots hammering login forms, scrapers ignoring robots.txt, vulnerability scanners. Nginx rate limiting and Fail2ban stop 95% of that cold.

Your 2 AM self will thank you for setting up rate limiting at 10 PM before bed. Not after waking up to a crashed server and a notification that your database is corrupted from an emergency shutdown.

