Nginx: The Config That Makes Sense

By SumGuy 8 min read

The Config File Nobody Explains Properly

You copied an nginx config from Stack Overflow. It worked. You have no idea why. Three months later something breaks and you’re staring at a wall of directives wondering which one to blame.

This is that article. The one that explains what’s actually happening so you’re not flying blind.

The Mental Model: Blocks Inside Blocks

Nginx config is a hierarchy of contexts. Everything you write lives inside one of these:

nginx.conf
# Main context — global settings
worker_processes auto;

events {
    # How nginx handles connections
    worker_connections 1024;
}

http {
    # All HTTP traffic config lives here
    include mime.types;
    sendfile on;

    server {
        # One virtual host
        listen 80;
        server_name example.com;

        location / {
            # What to do with this URL path
            root /var/www/html;
        }
    }
}

Main at the top, with events and http inside it; server nests inside http, and location inside server. Each block inherits from its parent but can override. Simple enough, until you have four server blocks and three location blocks fighting over the same request.

server_name Matching: Why _ Catches Everything

When nginx gets a request, it picks a server block based on the Host header. The matching order:

  1. Exact match (server_name example.com;)
  2. Wildcard prefix (server_name *.example.com;)
  3. Wildcard suffix (server_name example.*;)
  4. Regex match (server_name ~^www\.(.+)\.com$;)
  5. Default server (first listed, or default_server flag)

The underscore _ isn’t magic — it’s just a hostname that will never match any real request, making it a reliable catch-all for the default block:

nginx.conf
server {
    listen 80 default_server;
    server_name _;
    return 444;  # Drop the connection — no response
}

Use this to silently reject requests with no matching Host header. Bots and scanners love hitting your server IP directly — this shuts them down.
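To make the selection order concrete, here is a rough Python sketch of those five steps. It is not nginx's actual implementation (which also honors the default_server flag, per-listen defaults, and more subtle wildcard rules); it just walks the list above:

```python
import re

def pick_server(host, servers):
    """Approximate nginx's server block selection for a Host header.

    `servers` is a list of (server_name, block_id) pairs in file order;
    the first entry stands in for the default server. Sketch only.
    """
    # 1. Exact name match
    for name, block in servers:
        if name == host:
            return block
    # 2. Longest wildcard starting with * (e.g. *.example.com)
    starting = [(name, block) for name, block in servers
                if name.startswith("*.") and host.endswith(name[1:])]
    if starting:
        return max(starting, key=lambda p: len(p[0]))[1]
    # 3. Longest wildcard ending with * (e.g. example.*)
    ending = [(name, block) for name, block in servers
              if name.endswith(".*") and host.startswith(name[:-1])]
    if ending:
        return max(ending, key=lambda p: len(p[0]))[1]
    # 4. First matching regex, in file order (names starting with ~)
    for name, block in servers:
        if name.startswith("~") and re.search(name[1:], host):
            return block
    # 5. Fall through to the default server
    return servers[0][1]
```

With `[("_", "default"), ("example.com", "exact"), ("*.example.com", "wild")]`, a request for `example.com` lands on the exact block, `www.example.com` on the wildcard, and anything else on the catch-all.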

Location Block Matching: The Precedence Nobody Gets Right

This is where configs go wrong. Location matching has a priority system that does not run top-to-bottom:

Modifier  Type                      Priority
=         Exact match               Highest — wins immediately
^~        Prefix (no regex)         Stops regex search if matched
~         Regex (case-sensitive)    Evaluated in file order
~*        Regex (case-insensitive)  Evaluated in file order
(none)    Prefix                    Lowest — longest match wins

The gotcha everyone hits: regex locations beat plain prefix locations, regardless of order in the file. So this config does not do what it looks like:

nginx.conf
# WRONG — the regex will match /images/ before this prefix does
location /images/ {
    root /var/www/static;
}

location ~* \.(jpg|png|gif|webp)$ {
    expires 30d;
    add_header Cache-Control "public";
}

Fix it with ^~ to tell nginx “match this prefix and stop looking for regex”:

nginx.conf
# CORRECT — ^~ prevents regex locations from stealing /images/ requests
location ^~ /images/ {
    root /var/www/static;
    expires 30d;
    add_header Cache-Control "public";
}
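The precedence table can be sketched in Python. This is a simplified model, not nginx's real matcher (which handles nested locations and other cases), but it reproduces the gotcha above:

```python
import re

def pick_location(uri, locations):
    """Rough sketch of nginx location selection.

    `locations` is a list of (modifier, pattern, block_id) tuples in
    file order; modifier is one of "=", "^~", "~", "~*", or "".
    """
    # 1. Exact match wins immediately
    for mod, pat, block in locations:
        if mod == "=" and uri == pat:
            return block
    # 2. Find the longest matching prefix ("" or "^~")
    best = None
    for mod, pat, block in locations:
        if mod in ("", "^~") and uri.startswith(pat):
            if best is None or len(pat) > len(best[1]):
                best = (mod, pat, block)
    # 3. If the best prefix is ^~, regex locations are skipped
    if best and best[0] == "^~":
        return best[2]
    # 4. Otherwise, first matching regex in file order wins
    for mod, pat, block in locations:
        if mod == "~" and re.search(pat, uri):
            return block
        if mod == "~*" and re.search(pat, uri, re.IGNORECASE):
            return block
    # 5. Fall back to the longest plain prefix
    return best[2] if best else None
```

Run it against the two configs above: with a plain `/images/` prefix, `/images/cat.jpg` goes to the regex block; with `^~ /images/`, the prefix wins.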

proxy_pass: The Trailing Slash Trap

Proxying to an upstream app looks simple until you realize the trailing slash completely changes behavior:

nginx.conf
# WITHOUT trailing slash — URI passed as-is
# Request: /api/users → upstream gets /api/users
location /api/ {
    proxy_pass http://backend;
}

# WITH trailing slash — /api/ is stripped from the URI
# Request: /api/users → upstream gets /users
location /api/ {
    proxy_pass http://backend/;
}

Neither is wrong — they’re different behaviors. Know which one your app expects.
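The rule is mechanical: if proxy_pass has a URI part (anything after the host, even a bare `/`), the matched location prefix is replaced by it; if it has no URI part, the request URI passes through untouched. A small Python sketch of that rule (simplified, ignoring regex locations and rewrites):

```python
def upstream_uri(request_uri, location, proxy_pass):
    """Sketch of how proxy_pass rewrites the URI (simplified).

    With a URI part in proxy_pass, nginx swaps the matched location
    prefix for that part; without one, the URI is passed as-is.
    """
    rest = proxy_pass.split("://", 1)[1]     # drop "http://"
    slash = rest.find("/")
    if slash == -1:
        return request_uri                    # no URI part: pass as-is
    pass_uri = rest[slash:]                   # e.g. "/" or "/v2/"
    return pass_uri + request_uri[len(location):]
```

So `upstream_uri("/api/users", "/api/", "http://backend")` keeps `/api/users`, while `"http://backend/"` turns it into `/users`, and a hypothetical `"http://backend/v2/"` would produce `/v2/users`.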

The headers your backend also needs:

nginx.conf
location / {
    proxy_pass http://backend;
    proxy_http_version 1.1;

    # Pass the real host, not localhost
    proxy_set_header Host $host;

    # Pass the real client IP
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # WebSocket support (harmless if you don't need it)
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    # Don't let slow apps kill your worker processes
    proxy_connect_timeout 10s;
    proxy_send_timeout 30s;
    proxy_read_timeout 30s;
}

Without X-Forwarded-For, your app logs will show nginx’s IP for every request. Your 2 AM self debugging production will not appreciate this.

Static Files: root vs alias

Both serve static files. Both will silently do the wrong thing if you mix them up.

root appends the location prefix to the path. alias replaces it.

nginx.conf
# root: request /static/logo.png → serves /var/www/html/static/logo.png
location /static/ {
    root /var/www/html;
}

# alias: request /static/logo.png → serves /var/www/assets/logo.png
# Note: alias needs the trailing slash
location /static/ {
    alias /var/www/assets/;
}
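The append-vs-replace distinction fits in four lines of Python. A sketch, not nginx's code:

```python
def resolve_path(uri, location, directive, value):
    """Sketch of root vs alias path resolution (simplified).

    root appends the FULL request URI to its value;
    alias replaces the matched location prefix with its value.
    """
    if directive == "root":
        return value + uri
    if directive == "alias":
        return value + uri[len(location):]
    raise ValueError(f"unknown directive: {directive}")
```

Mix them up and you get paths like `/var/www/assets/static/logo.png` (root pointed at the alias target) that simply 404, with nothing in the error log to hint why.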

The classic pattern for single-page apps — serve index.html for any route that doesn’t match a real file:

nginx.conf
location / {
    root /var/www/app;
    try_files $uri $uri/ /index.html;
}

try_files checks each path left to right. $uri checks if the file exists. $uri/ checks for a directory. /index.html is the fallback. This is the correct pattern — not try_files $uri $uri/ =404 if you’re running a SPA.
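That left-to-right check is easy to model. A simplified Python sketch (real try_files also handles named locations and status codes like =404):

```python
import os
import tempfile

def try_files(docroot, uri, candidates):
    """Sketch of try_files: check candidates left to right (simplified).

    "$uri" is checked as a file, "$uri/" as a directory; the final
    candidate is the fallback and is used without any check.
    """
    for cand in candidates[:-1]:
        path = docroot + cand.replace("$uri", uri)
        if cand.endswith("/"):
            if os.path.isdir(path.rstrip("/")):
                return cand
        elif os.path.isfile(path):
            return cand
    return candidates[-1]

# A throwaway docroot containing only index.html, as in the SPA pattern
docroot = tempfile.mkdtemp()
with open(os.path.join(docroot, "index.html"), "w") as f:
    f.write("<!doctype html>")
```

A request for `/some/route` matches no file and no directory, so it falls through to `/index.html` and the SPA router takes over; a request for `/index.html` is served directly.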

Gzip: Compress Wisely

Enable gzip, but not for everything:

nginx.conf
http {
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_min_length 1000;

    # Text formats worth compressing
    gzip_types
        text/plain
        text/css
        text/javascript
        application/javascript
        application/json
        application/xml
        image/svg+xml;

    # Do NOT add: image/jpeg image/png image/webp video/mp4
    # Already compressed — you'll waste CPU and make files larger
}

JPEG, PNG, WebP, MP4, and ZIP are already compressed. Running gzip on them burns CPU for zero gain. Sometimes it produces slightly larger output. Leave them alone.
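You can demonstrate this in a few lines of Python, using random bytes as a stand-in for an already-compressed image payload:

```python
import gzip
import os

# Repetitive text, like real HTML/CSS/JS: compresses extremely well
text = b"<div class='row'><span>hello world</span></div>" * 500
# High-entropy bytes, standing in for JPEG/PNG/MP4 payloads
noise = os.urandom(len(text))

text_gz = gzip.compress(text, compresslevel=6)
noise_gz = gzip.compress(noise, compresslevel=6)

print(f"text:  {len(text)} -> {len(text_gz)} bytes")   # shrinks dramatically
print(f"noise: {len(noise)} -> {len(noise_gz)} bytes") # larger than the input
```

The text collapses to a fraction of its size; the noise comes out bigger than it went in, because gzip adds header and framing overhead to data it cannot compress.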

Rate Limiting: Stop the Hammering

Define the zone in the http block, use it in location:

nginx.conf
http {
    # 10MB zone, 10 req/sec per IP
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

    server {
        location /api/ {
            # Allow burst of 20 requests, process immediately (nodelay)
            limit_req zone=api burst=20 nodelay;
            proxy_pass http://backend;
        }
    }
}

Without nodelay, nginx queues burst requests and drips them out at the rate limit. With nodelay, burst requests go through immediately but count against the burst budget. For APIs, nodelay is almost always what you want — users don’t want a queued response, they want a fast 429.
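Under the hood this is a leaky bucket. Here's a rough Python sketch of the nodelay behavior (nginx's real accounting works in milliseconds per key in shared memory; this just models one client):

```python
def make_limiter(rate, burst):
    """Sketch of limit_req with nodelay: a leaky bucket for one key.

    Each accepted request adds 1 to the bucket's excess; the excess
    drains at `rate` per second. Once the excess exceeds `burst`,
    requests are rejected (nginx would return 503, or 429 if
    configured); accepted ones are served immediately (nodelay).
    """
    state = {"excess": 0.0, "last": None}

    def allow(now):
        if state["last"] is not None:
            elapsed = now - state["last"]
            state["excess"] = max(0.0, state["excess"] - rate * elapsed)
        state["last"] = now
        if state["excess"] > burst:   # bucket full: reject
            return False
        state["excess"] += 1.0        # nodelay: serve immediately
        return True

    return allow
```

With rate=10 and burst=20, an instant flood gets 21 requests through (one at the base rate plus the burst budget), everything after that is rejected, and one second later ten more slots have drained free.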

Stack Overflow Configs That Are Wrong

Wrong: add_header in multiple blocks

nginx.conf
# This DOES NOT merge headers — child block replaces parent headers entirely
http {
    add_header X-Frame-Options SAMEORIGIN;

    server {
        add_header X-Content-Type-Options nosniff;
        # X-Frame-Options is now gone in this server block
    }
}

Fix: put all your add_header directives in one place, or repeat them everywhere they’re needed.

Wrong: worker_processes 4; on a 2-core machine

Set it to auto. Nginx will match your CPU count. Hardcoding a higher number doesn’t give you more performance — it adds context-switching overhead.

Wrong: Missing include mime.types;

Without this, nginx serves everything as application/octet-stream. Your CSS and JS will download instead of execute. This one is fun to debug at midnight.

Docker Compose: Nginx as Reverse Proxy

The practical setup. Your app containers don’t need exposed ports — nginx handles all traffic:

docker-compose.yml
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/etc/nginx/certs:ro
    depends_on:
      - app
  app:
    image: your-app:latest
    # No ports exposed — nginx reaches it via Docker network
    expose:
      - "3000"
  api:
    image: your-api:latest
    expose:
      - "8080"
nginx.conf
events {
    worker_connections 1024;
}

http {
    upstream app_backend {
        server app:3000;
    }

    upstream api_backend {
        server api:8080;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://app_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        location /api/ {
            proxy_pass http://api_backend/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}

In Docker Compose, service names are DNS hostnames. app:3000 works because they’re on the same network. No need to know IP addresses.

Test Before You Reload

The one command to run before every config change:

Terminal window
nginx -t

That’s it. It parses your config, validates it, and tells you exactly where you messed up — before reloading and taking your site down.

In Docker:

Terminal window
docker exec nginx nginx -t && docker exec nginx nginx -s reload

Only reload if the test passes. Chain them with &&. This is not optional.

The Payoff

Nginx config stops being scary once you understand the hierarchy. Main block handles workers. Events handles connections. HTTP handles web behavior. Server blocks pick the right virtual host. Location blocks route the request.

Every directive has a context where it belongs. Every match rule has a priority. The trailing slash either matters or it doesn’t — now you know which.

Save this. You’ll open it again.

