Your Nginx Config Has 47 Lines and Does One Thing. There’s a Better Way.
If you’ve spent any time with nginx reverse proxy configs, you know the ritual. Include this block. Reference that ssl_certificate path. Remember where the location / block goes versus the location /api/ block. Copy the magic SSL snippet from Mozilla’s SSL Configuration Generator because nobody has those cipher values memorized. Add a proxy_set_header Host $host because you forgot it last time and spent an hour debugging why authentication wasn’t working.
Caddy is a different philosophy. Automatic HTTPS is on by default — you don’t configure it, you don’t reference cert paths, it just happens. A basic Caddy reverse proxy config is three lines. An advanced one with auth, rate limiting, and multiple services is maybe forty.
That’s not marketing. Let’s look at what “advanced Caddy” actually means.
Automatic HTTPS: How It Actually Works
When you point a domain at your Caddy server, Caddy:
- Detects that the domain needs a certificate
- Initiates an ACME challenge with Let’s Encrypt (or ZeroSSL — configurable)
- Handles the HTTP-01 challenge response itself
- Gets the certificate, stores it, serves HTTPS
- Auto-renews before expiry
You don’t touch any of this. Your Caddyfile has app.yourdomain.com and Caddy handles the certificate lifecycle. The only requirements: ports 80 and 443 reachable, and DNS pointing at your server.
For local networks and internal services, you can use the tls internal directive — Caddy issues its own CA and certificates, though browsers will warn unless you add Caddy’s root CA to your trust store.
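As a minimal sketch of the internal case (the hostname and port here are placeholders):

```
dashboard.lan {
    tls internal
    reverse_proxy localhost:3000
}
```

Caddy’s internal CA signs a certificate for dashboard.lan; running `caddy trust` on a machine (or importing the root CA from Caddy’s data directory) silences the browser warning there.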
Wildcard Certificates with DNS Challenge
The HTTP-01 challenge requires port 80 to be publicly reachable. That doesn’t work for internal services, or if you’re running behind Cloudflare Tunnel. The DNS-01 challenge uses DNS records instead — Caddy adds a TXT record to prove domain ownership.
This is where Caddy’s plugin system becomes important. DNS challenge providers are plugins.
Building Caddy with DNS plugins (Docker approach):
FROM caddy:builder AS builder
RUN xcaddy build \
    --with github.com/caddy-dns/cloudflare \
    --with github.com/caddy-dns/route53

FROM caddy:latest
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
# docker-compose.yml
services:
  caddy:
    build: .
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy-data:/data
      - caddy-config:/config
    environment:
      CLOUDFLARE_API_TOKEN: your-token-here

volumes:
  caddy-data:
  caddy-config:
Now in your Caddyfile:
*.yourdomain.com {
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }

    @app host app.yourdomain.com
    handle @app {
        reverse_proxy localhost:8080
    }

    @api host api.yourdomain.com
    handle @api {
        reverse_proxy localhost:3000
    }

    handle {
        respond "No matching service" 404
    }
}
One wildcard certificate, multiple subdomains, zero per-service cert configuration.
Caddyfile Matchers: The Feature That Changes Everything
Matchers are how Caddy does conditional logic. Instead of nginx’s nested if blocks (which are genuinely weird), Caddy uses named matchers with the @ prefix.
example.com {
    # Match path prefix
    @api path /api/*

    # Match path and method
    @admin {
        path /admin/*
        method GET POST
    }

    # Match by header
    @mobile header User-Agent *Mobile*

    # Match remote IP
    @internal remote_ip 192.168.0.0/16

    handle @api {
        reverse_proxy api-service:8000
    }

    handle @admin {
        basicauth {
            admin JDJhJDE0JE91S... # bcrypt hash
        }
        reverse_proxy admin-panel:9000
    }

    handle @internal {
        reverse_proxy internal-tool:8888
    }

    # Default handler
    handle {
        reverse_proxy main-app:8080
    }
}
Matchers can match on path, method, header, query string, remote IP, protocol, and more. Conditions inside a single matcher block combine with AND logic (all of them must match); OR logic is expressed with separate named matchers.
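A sketch with placeholder paths and networks makes the combinations concrete: listing several values after path is itself OR logic across those paths, while not inverts one condition inside an AND block.

```
example.com {
    # OR across paths: matches /assets/* or /images/*
    @static path /assets/* /images/*

    # AND plus negation: POST requests NOT from the internal network
    @external_post {
        method POST
        not remote_ip 192.168.0.0/16
    }

    handle @external_post {
        respond "Writes are internal-only" 403
    }

    handle @static {
        file_server
    }
}
```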
Snippets: DRY Config for Multiple Services
When you have fifteen services with the same security headers and logging config, snippets save you from copy-paste hell.
(security_headers) {
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains"
        X-Frame-Options DENY
        X-Content-Type-Options nosniff
        Referrer-Policy strict-origin-when-cross-origin
        -Server
    }
}

(logging) {
    log {
        output file /var/log/caddy/access.log {
            roll_size 100mb
            roll_keep 5
        }
        format json
    }
}

app1.yourdomain.com {
    import security_headers
    import logging
    reverse_proxy app1:8080
}

app2.yourdomain.com {
    import security_headers
    import logging
    reverse_proxy app2:3000
}
Forward Auth with Authelia or Authentik
This is the configuration that makes Caddy genuinely powerful for self-hosters. Forward auth outsources authentication to a separate service — every request goes through your auth service first, and only reaches the backend if the auth service approves.
With Authelia
authelia.yourdomain.com {
    reverse_proxy authelia:9091
}

(authelia_auth) {
    forward_auth authelia:9091 {
        uri /api/verify?rd=https://authelia.yourdomain.com
        copy_headers Remote-User Remote-Groups Remote-Email Remote-Name
    }
}

protected-app.yourdomain.com {
    import authelia_auth
    reverse_proxy protected-app:8080
}

another-app.yourdomain.com {
    import authelia_auth
    reverse_proxy another-app:9000
}
Now every request to protected-app.yourdomain.com is intercepted by Caddy, checked with Authelia, and either passed through (with user info headers) or redirected to Authelia’s login page. Your application doesn’t need to know any of this is happening.
With Authentik
(authentik_auth) {
    forward_auth authentik:9000 {
        uri /outpost.goauthentik.io/auth/caddy
        copy_headers X-authentik-username X-authentik-groups \
            X-authentik-email X-authentik-name X-authentik-uid
        trusted_proxies private_ranges
    }
}
Rate Limiting
Rate limiting isn’t in Caddy’s standard distribution; it requires the caddy-ratelimit plugin, built in with xcaddy the same way as the DNS plugins above. Once installed:
app.yourdomain.com {
    @api path /api/*
    handle @api {
        rate_limit {
            zone api_zone {
                key {remote_host}
                events 100
                window 1m
            }
        }
        reverse_proxy api:8000
    }

    handle {
        reverse_proxy frontend:3000
    }
}
This limits each IP to 100 requests per minute on /api/* paths. Clients that exceed the limit get a 429 response.
On-Demand TLS
On-demand TLS is a feature you don’t know you need until you need it. It issues certificates at request time rather than at startup — useful if you’re running a multi-tenant application where users bring their own domains.
{
    on_demand_tls {
        ask http://localhost:8080/check-domain
        interval 2m
        burst 5
    }
}

:443 {
    tls {
        on_demand
    }
    reverse_proxy app:8080
}
Caddy calls your /check-domain endpoint before issuing a cert for a new domain. Your app returns 200 (allow) or 4xx (deny). Caddy issues the cert, caches it, and serves the request. Future requests for the same domain use the cached cert.
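The ask endpoint can be a few lines in any language. A sketch in Python using only the standard library — the allow-list, port, and domain names are made up, and a real implementation would query your tenant database:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Hypothetical allow-list; replace with a lookup against your tenant store.
ALLOWED_DOMAINS = {"tenant1.example.com", "tenant2.example.com"}

def check_domain(domain: str) -> int:
    """Return the status Caddy expects: 200 to allow issuance, 404 to deny."""
    return 200 if domain in ALLOWED_DOMAINS else 404

class AskHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Caddy calls GET /check-domain?domain=<hostname>
        query = parse_qs(urlparse(self.path).query)
        domain = query.get("domain", [""])[0]
        self.send_response(check_domain(domain))
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), AskHandler).serve_forever()
```

Keep this endpoint fast and unauthenticated-read-only: Caddy calls it in the handshake path for every unknown domain.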
This is how services like Vercel and Netlify handle custom domains. You can implement the same thing for your own multi-tenant app.
The JSON Config API
The Caddyfile is a convenience layer. Under the hood, Caddy runs a JSON config. You can interact with it directly via the Admin API (default: localhost:2019).
View current config:
curl localhost:2019/config/
Update a route without restarting:
curl -X POST localhost:2019/config/apps/http/servers/srv0/routes \
  -H "Content-Type: application/json" \
  -d '{
    "match": [{"host": ["newapp.yourdomain.com"]}],
    "handle": [{
      "handler": "reverse_proxy",
      "upstreams": [{"dial": "newapp:8080"}]
    }]
  }'
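The same call is easy to script. A sketch in Python with only the standard library — the host, upstream, and the srv0 server name mirror the curl example and are assumptions about your config:

```python
import json
import urllib.request

def make_route(host: str, upstream: str) -> dict:
    """Build the same reverse_proxy route object the curl example posts."""
    return {
        "match": [{"host": [host]}],
        "handle": [{
            "handler": "reverse_proxy",
            "upstreams": [{"dial": upstream}],
        }],
    }

def add_route(host: str, upstream: str, admin: str = "http://localhost:2019"):
    """POST the route to Caddy's Admin API, appending to srv0's route list."""
    req = urllib.request.Request(
        f"{admin}/config/apps/http/servers/srv0/routes",
        data=json.dumps(make_route(host, upstream)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)

# add_route("newapp.yourdomain.com", "newapp:8080")  # requires a running Caddy
```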
Reload config from file:
caddy reload --config /etc/caddy/Caddyfile
The Admin API enables programmatic configuration changes without file edits or service restarts. This is how caddy-docker-proxy works — it watches Docker events and updates Caddy’s config via the API when containers start or stop.
caddy-docker-proxy: Labels as Config
caddy-docker-proxy is a companion container that translates Docker labels into Caddy config, similar to Traefik’s label-based approach.
services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:latest
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - caddy-data:/data
    networks:
      - proxy

  myapp:
    image: myapp:latest
    networks:
      - proxy
    labels:
      caddy: app.yourdomain.com
      caddy.reverse_proxy: "{{upstreams 8080}}"
      caddy.header.Strict-Transport-Security: "max-age=31536000"

volumes:
  caddy-data:

networks:
  proxy:
    external: true
When myapp starts, caddy-docker-proxy picks up the labels and adds the route to Caddy. When the container stops, the route is removed. No editing Caddyfile, no reloads — just labels.
The Nginx Config Complexity Comparison
Honest comparison of the same setup in both:
Caddy (reverse proxy + HTTPS + security headers + rate limiting):
app.yourdomain.com {
    import security_headers
    rate_limit { zone myzone { key {remote_host} events 60 window 1m } }
    reverse_proxy app:8080
}
~5 lines (plus the snippet definition once).
Nginx equivalent:
- SSL certificate paths configured
- SSL protocols and ciphers block
- Security headers block (7+ lines)
- Rate limiting defined in the http {} block
- Server block with location blocks and proxy_set_header directives
- Certificate renewal cron job or certbot configuration
Not saying nginx is bad — it’s extremely capable and performant. But “can I read and understand this config six months from now” is a real operational question, and Caddy wins it.
Practical Multi-Service Setup
A realistic homelab Caddyfile managing multiple services:
{
    email your@email.com
}

# (security_headers) snippet from earlier, omitted here for brevity

(common) {
    import security_headers
    encode gzip
}

(authelia_auth) {
    forward_auth authelia:9091 {
        uri /api/verify?rd=https://auth.yourdomain.com
        copy_headers Remote-User Remote-Groups Remote-Email
    }
}

auth.yourdomain.com {
    reverse_proxy authelia:9091
}

jellyfin.yourdomain.com {
    import common
    reverse_proxy jellyfin:8096
}

*.internal.yourdomain.com {
    tls { dns cloudflare {env.CLOUDFLARE_API_TOKEN} }
    import authelia_auth
    import common

    @grafana host grafana.internal.yourdomain.com
    handle @grafana { reverse_proxy grafana:3000 }

    @portainer host portainer.internal.yourdomain.com
    handle @portainer { reverse_proxy portainer:9000 }
}
One file. Multiple services. Automatic HTTPS everywhere. Auth on internal services. Secure headers for all. Readable in six months.
That’s the pitch for Caddy, and it holds up.