The Config File Nobody Explains Properly
You copied an nginx config from Stack Overflow. It worked. You have no idea why. Three months later something breaks and you’re staring at a wall of directives wondering which one to blame.
This is that article. The one that explains what’s actually happening so you’re not flying blind.
The Mental Model: Blocks Inside Blocks
Nginx config is a hierarchy of contexts. Everything you write lives inside one of these:
```nginx
# Main context — global settings
worker_processes auto;

events {
    # How nginx handles connections
    worker_connections 1024;
}

http {
    # All HTTP traffic config lives here
    include mime.types;
    sendfile on;

    server {
        # One virtual host
        listen 80;
        server_name example.com;

        location / {
            # What to do with this URL path
            root /var/www/html;
        }
    }
}
```

Main → Events → HTTP → Server → Location. Each one inherits from its parent but can override. Simple enough, until you have four server blocks and three location blocks fighting over the same request.
server_name Matching: Why _ Catches Everything
When nginx gets a request, it picks a server block based on the Host header. The matching order:
1. Exact match (`server_name example.com;`)
2. Wildcard starting with an asterisk (`server_name *.example.com;`)
3. Wildcard ending with an asterisk (`server_name example.*;`)
4. Regex match (`server_name ~^www\.(.+)\.com$;`), checked in config order
5. Default server (the first one listed, or the one with the `default_server` flag)
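To make the order concrete, here's a sketch (the hostnames in the comments are illustrative, not from the article) showing which block wins for a few Host headers:

```nginx
# Three competing server blocks on the same port
server {
    listen 80;
    server_name example.com;          # Host: example.com → exact match wins
}
server {
    listen 80;
    server_name *.example.com;        # Host: api.example.com → leading wildcard wins
}
server {
    listen 80;
    server_name ~^www\.(.+)\.com$;    # Host: www.other.com → regex, checked last
}
```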
The underscore _ isn’t magic — it’s just a hostname that will never match any real request, making it a reliable catch-all for the default block:
```nginx
server {
    listen 80 default_server;
    server_name _;
    return 444;  # Drop the connection — no response
}
```

Use this to silently reject requests with no matching Host header. Bots and scanners love hitting your server IP directly — this shuts them down.
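The same catch-all idea extends to TLS. Since nginx 1.19.4 you can refuse the handshake outright for SNI names you don't serve, instead of handing out a certificate. A minimal sketch:

```nginx
# Refuse TLS handshakes that don't match any configured server_name
server {
    listen 443 ssl default_server;
    ssl_reject_handshake on;
}
```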
Location Block Matching: The Precedence Nobody Gets Right
This is where configs go wrong. Location matching has a priority system that does not run top-to-bottom:
| Modifier | Type | Priority |
|---|---|---|
| `=` | Exact match | Highest — wins immediately |
| `^~` | Prefix (no regex) | Stops regex search if matched |
| `~` | Regex (case-sensitive) | Evaluated in order |
| `~*` | Regex (case-insensitive) | Evaluated in order |
| (none) | Prefix | Lowest — longest match wins |
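The `=` modifier earns its keep on hot, fixed paths: the moment it matches, nginx stops searching and no regex location is even consulted. A sketch (the `/healthz` endpoint is an assumption for illustration):

```nginx
# Exact match: answered immediately, regex locations never run
location = /healthz {
    access_log off;
    return 200 "ok\n";
}
```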
The gotcha everyone hits: regex locations beat plain prefix locations, regardless of order in the file. So this config does not do what it looks like:
```nginx
# WRONG — the regex will match /images/ before this prefix does
location /images/ {
    root /var/www/static;
}

location ~* \.(jpg|png|gif|webp)$ {
    expires 30d;
    add_header Cache-Control "public";
}
```

Fix it with `^~` to tell nginx “match this prefix and stop looking for regex”:

```nginx
# CORRECT — ^~ prevents regex locations from stealing /images/ requests
location ^~ /images/ {
    root /var/www/static;
    expires 30d;
    add_header Cache-Control "public";
}
```

proxy_pass: The Trailing Slash Trap
Proxying to an upstream app looks simple until you realize the trailing slash completely changes behavior:
```nginx
# WITHOUT trailing slash — URI passed as-is
# Request: /api/users → upstream gets /api/users
location /api/ {
    proxy_pass http://backend;
}

# WITH trailing slash — /api/ is stripped from the URI
# Request: /api/users → upstream gets /users
location /api/ {
    proxy_pass http://backend/;
}
```

Neither is wrong — they’re different behaviors. Know which one your app expects.
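The rule is actually more general than stripping: whatever URI part follows the upstream name replaces the matched location prefix. So a hypothetical versioned backend can be remapped in one line:

```nginx
# Request: /api/users → upstream gets /v2/users
# (the /api/ prefix is swapped for /v2/; "backend" and /v2/ are illustrative)
location /api/ {
    proxy_pass http://backend/v2/;
}
```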
The headers your backend also needs:
```nginx
location / {
    proxy_pass http://backend;
    proxy_http_version 1.1;

    # Pass the real host, not localhost
    proxy_set_header Host $host;

    # Pass the real client IP
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # WebSocket support (harmless if you don't need it)
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    # Don't let slow apps kill your worker processes
    proxy_connect_timeout 10s;
    proxy_send_timeout 30s;
    proxy_read_timeout 30s;
}
```

Without `X-Forwarded-For`, your app logs will show nginx’s IP for every request. Your 2 AM self debugging production will not appreciate this.
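One refinement if the same location serves both WebSocket and plain HTTP traffic: the nginx documentation's `map` pattern sets `Connection` conditionally instead of forcing `upgrade` on every request:

```nginx
# In the http context — $connection_upgrade becomes "upgrade" only when
# the client actually sent an Upgrade header, and "close" otherwise
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
```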
Static Files: root vs alias
Both serve static files. Both will silently do the wrong thing if you mix them up.
root appends the location prefix to the path. alias replaces it.
```nginx
# root: request /static/logo.png → serves /var/www/html/static/logo.png
location /static/ {
    root /var/www/html;
}

# alias: request /static/logo.png → serves /var/www/assets/logo.png
# Note: alias needs the trailing slash
location /static/ {
    alias /var/www/assets/;
}
```

The classic pattern for single-page apps — serve index.html for any route that doesn’t match a real file:
```nginx
location / {
    root /var/www/app;
    try_files $uri $uri/ /index.html;
}
```

`try_files` checks each path left to right. `$uri` checks if the file exists. `$uri/` checks for a directory. `/index.html` is the fallback. This is the correct pattern — not `try_files $uri $uri/ =404` — if you’re running a SPA.
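That said, `=404` still has a place in the same app: hashed build assets should 404 when missing, because falling back to index.html there would silently mask a broken deploy. A sketch, assuming the build output lives under `/assets/`:

```nginx
# Hashed build assets: a missing file is a real error, not a client-side route
location /assets/ {
    root /var/www/app;
    try_files $uri =404;
}
```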
Gzip: Compress Wisely
Enable gzip, but not for everything:
```nginx
http {
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_min_length 1000;

    # Text formats worth compressing
    gzip_types text/plain text/css text/javascript
               application/javascript application/json
               application/xml image/svg+xml;

    # Do NOT add: image/jpeg image/png image/webp video/mp4
    # Already compressed — you'll waste CPU and make files larger
}
```

JPEG, PNG, WebP, MP4, and ZIP are already compressed. Running gzip on them burns CPU for zero gain. Sometimes it produces slightly larger output. Leave them alone.
Rate Limiting: Stop the Hammering
Define the zone in the http block, use it in location:
```nginx
http {
    # 10MB zone, 10 req/sec per IP
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

    server {
        location /api/ {
            # Allow burst of 20 requests, process immediately (nodelay)
            limit_req zone=api burst=20 nodelay;
            proxy_pass http://backend;
        }
    }
}
```

Without `nodelay`, nginx queues burst requests and drips them out at the rate limit. With `nodelay`, burst requests go through immediately but count against the burst budget. For APIs, `nodelay` is almost always what you want — users don’t want a queued response, they want a fast 429.
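One detail worth adding: by default nginx answers rate-limited requests with 503, not 429. If you want clients to actually see Too Many Requests, set the status explicitly:

```nginx
location /api/ {
    limit_req zone=api burst=20 nodelay;
    limit_req_status 429;   # default is 503 — say what you mean
    proxy_pass http://backend;
}
```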
Stack Overflow Configs That Are Wrong
Wrong: add_header in multiple blocks
```nginx
# This DOES NOT merge headers — child block replaces parent headers entirely
http {
    add_header X-Frame-Options SAMEORIGIN;

    server {
        add_header X-Content-Type-Options nosniff;
        # X-Frame-Options is now gone in this server block
    }
}
```

Fix: put all your add_header directives in one place, or repeat them everywhere they’re needed.
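A sketch of the “repeat them everywhere” fix, so every header reaches the client on every response path (the `/downloads/` block is illustrative):

```nginx
server {
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;

    location /downloads/ {
        # This block adds its own header, so it must repeat the others too
        add_header Content-Disposition attachment;
        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;
    }
}
```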
Wrong: worker_processes 4; on a 2-core machine
Set it to auto. Nginx will match your CPU count. Hardcoding a higher number doesn’t give you more performance — it adds context-switching overhead.
Wrong: Missing include mime.types;
Without this, nginx serves everything as application/octet-stream. Your CSS and JS will download instead of execute. This one is fun to debug at midnight.
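The stock nginx.conf pairs the include with a fallback type; keeping both lines together is the pattern to copy:

```nginx
http {
    include mime.types;                      # maps .css → text/css, .js → text/javascript, ...
    default_type application/octet-stream;   # fallback for extensions mime.types doesn't know
}
```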
Docker Compose: Nginx as Reverse Proxy
The practical setup. Your app containers don’t need exposed ports — nginx handles all traffic:
```yaml
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/etc/nginx/certs:ro
    depends_on:
      - app

  app:
    image: your-app:latest
    # No ports exposed — nginx reaches it via Docker network
    expose:
      - "3000"

  api:
    image: your-api:latest
    expose:
      - "8080"
```

The matching nginx.conf:

```nginx
events {
    worker_connections 1024;
}

http {
    upstream app_backend {
        server app:3000;
    }

    upstream api_backend {
        server api:8080;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://app_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        location /api/ {
            proxy_pass http://api_backend/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```

In Docker Compose, service names are DNS hostnames. `app:3000` works because they’re on the same network. No need to know IP addresses.
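One Docker-specific caveat: nginx resolves upstream hostnames once, at startup. If a service restarts and gets a new IP, nginx can keep proxying to the old one. A common workaround (assuming Docker's embedded DNS at 127.0.0.11, the default on user-defined networks) uses a variable, which forces per-request resolution:

```nginx
# Re-resolve the service name at request time instead of caching it at startup
resolver 127.0.0.11 valid=10s;

location / {
    set $app_upstream http://app:3000;
    proxy_pass $app_upstream;
}
```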
Test Before You Reload
The one command to run before every config change:
```shell
nginx -t
```

That’s it. It parses your config, validates it, and tells you exactly where you messed up — before reloading and taking your site down.
In Docker:
```shell
docker exec nginx nginx -t && docker exec nginx nginx -s reload
```

Only reload if the test passes. Chain them with `&&`. This is not optional.
The Payoff
Nginx config stops being scary once you understand the hierarchy. Main block handles workers. Events handles connections. HTTP handles web behavior. Server blocks pick the right virtual host. Location blocks route the request.
Every directive has a context where it belongs. Every match rule has a priority. The trailing slash either matters or it doesn’t — now you know which.
Save this. You’ll open it again.