The Innocent Bug Report
User: “Uploading a 500MB file works great on my home network. When I upload through NGINX, it times out after 60 seconds.”
You check NGINX logs. The client closed the connection. You check your backend app logs — nothing. The request never made it.
NGINX killed it.
Here’s the thing: NGINX has timeout settings that don’t match your backend. And they’re poorly named. And most people don’t know they exist. So your uploads die silently.
The Timeouts That Matter
When you configure a reverse proxy, there are multiple timeout settings. NGINX has at least five:

- client_body_timeout — the longest gap the client may leave between sends of the request body
- client_header_timeout — how long the client can take to send the headers
- proxy_connect_timeout — how long to wait for a connection to the backend
- proxy_read_timeout — how long to wait for a response from the backend
- proxy_send_timeout — how long to wait while sending data to the backend
The one that kills uploads: client_body_timeout.
By default, it’s 60 seconds. And the fine print matters: the timeout applies between two successive reads, not to the whole body. If the client stalls for more than 60 seconds mid-upload — easy on a flaky connection — NGINX closes the connection before the body is ever fully sent to the backend.
Your app never sees it.
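You can see the failure mode without touching NGINX. Here’s a toy sketch in Python: a proxy-like server that enforces a maximum gap between reads — the same rule client_body_timeout uses — with a 2-second limit standing in for the 60-second default. The port, sizes, and timings are all made up for the demo.

```python
import socket
import threading
import time

result = {}

def proxy_like_server(port, timeout_s):
    # Accept one connection and read the body, allowing at most
    # `timeout_s` seconds between successive reads -- the same rule
    # client_body_timeout enforces.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.settimeout(timeout_s)
    received = 0
    try:
        while True:
            chunk = conn.recv(4096)
            if not chunk:
                break
            received += len(chunk)
    except socket.timeout:
        result["timed_out"] = True
    finally:
        result["received"] = received
        conn.close()
        srv.close()

t = threading.Thread(target=proxy_like_server, args=(8099, 2.0))
t.start()
time.sleep(0.2)  # give the server a moment to start listening

client = socket.create_connection(("127.0.0.1", 8099))
client.sendall(b"x" * 10_000)  # first chunk of the "upload" arrives fine
time.sleep(3)                  # then the client stalls past the timeout
try:
    client.sendall(b"y" * 10_000)  # the rest never makes it
except OSError:
    pass
client.close()
t.join()
print(result)
```

The server gives up mid-body and closes the socket — exactly what the real backend never gets to log.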
The Bad Config
```
server {
    listen 80;
    server_name example.com;

    location /upload {
        proxy_pass http://backend:8080;
    }
}
```

A file upload to backend:8080 is in flight. The user is uploading 500MB over a slow, flaky connection. It takes 2 minutes.
The moment the connection stalls for 60 seconds, NGINX’s client_body_timeout fires. It closes the socket.
The client gets: Connection reset by peer
Your app sees: Nothing
The Fix
```
server {
    listen 80;
    server_name example.com;

    # Increase all the timeouts
    client_body_timeout 300s;
    client_header_timeout 300s;
    proxy_connect_timeout 300s;
    proxy_send_timeout 300s;
    proxy_read_timeout 300s;

    location /upload {
        proxy_pass http://backend:8080;
    }
}
```

Now uploads get 5 minutes (300 seconds). Plenty of time.
But wait, this is dumb. You don’t want a timeout that long everywhere. Most requests finish in under a second. You just want long uploads to work.
Better approach:
```
server {
    listen 80;
    server_name example.com;

    # Default: 60 seconds (fine for most things)
    client_body_timeout 60s;
    client_header_timeout 60s;
    proxy_read_timeout 60s;
    proxy_send_timeout 60s;

    # But uploads get more time
    location /upload {
        client_body_timeout 600s;
        proxy_send_timeout 600s;
        proxy_read_timeout 600s;

        proxy_pass http://backend:8080;
    }

    # API calls are usually fast
    location /api {
        client_body_timeout 30s;
        proxy_read_timeout 30s;
        proxy_send_timeout 30s;

        proxy_pass http://backend:8080;
    }
}
```

Now uploads get 10 minutes, APIs get 30 seconds, and everything else gets the default.
Buffering Issue
There’s another gotcha. NGINX buffers the request body before forwarding it: anything larger than the in-memory buffer (8KB or 16KB by default, depending on platform) gets spooled to a temp file on disk, and with request buffering on — the default — your backend doesn’t even see the request until the whole body has arrived. (That’s also why your app logs showed nothing. proxy_request_buffering off; streams it through instead.) Usually this is fine. But:
```
location /upload {
    # If you set this too small, NGINX will reject the request outright
    client_max_body_size 5G;

    client_body_timeout 600s;
    proxy_send_timeout 600s;
    proxy_read_timeout 600s;

    proxy_pass http://backend:8080;
}
```

client_max_body_size caps the size of the request body. By default it’s 1MB — so with the default in place, someone trying to upload 5GB gets rejected immediately with a 413 (Request Entity Too Large).
Make sure this is larger than your largest expected upload.
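A quick sanity check before picking numbers: divide your worst-case file size by your worst-case client bandwidth and see whether it fits inside the timeout. The figures below are assumptions — plug in your own.

```python
# Worst case: the biggest file over the slowest connection you care about.
def upload_seconds(size_mb, bandwidth_kbps):
    return size_mb * 1024 / bandwidth_kbps

worst_case = upload_seconds(size_mb=500, bandwidth_kbps=1000)  # 500MB at ~1MB/s
print(f"worst case: {worst_case:.0f}s")
```

Here that works out to 512 seconds — a 300s timeout silently loses those users, while 600s covers them with margin.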
Testing Your Timeouts
How do you verify this works? You can’t just upload a giant file in a test. Simulate slow uploads:
```
# Create a 100MB file
dd if=/dev/zero of=testfile.bin bs=1M count=100

# Upload it slowly with curl
# Limit upload speed to 100 KB/s (so it takes ~1000 seconds)
curl -F "file=@testfile.bin" \
  --limit-rate 100K \
  http://example.com/upload
```

This will take forever, but you can cancel it early and verify the connection survives. One caveat: a steady trickle won’t trip client_body_timeout, which only fires on a stall — to test that case, suspend curl mid-transfer (Ctrl-Z) for longer than the timeout, then resume it with fg.
For real testing, have a backend endpoint that deliberately sleeps:
```
# Flask example (assumes `pip install flask`)
import time

from flask import Flask, request

app = Flask(__name__)

@app.route('/upload', methods=['POST'])
def upload():
    file = request.files['file']  # forces the body to be read
    time.sleep(60)                # simulate slow processing
    return 'ok'

app.run(port=5000)
```

Then upload something:
```
curl -F "file=@testfile.bin" http://localhost:5000/upload
```

If your timeout is too short, you’ll see:

```
curl: (56) Recv failure: Connection reset by peer
```

If it works, you’ll get ok.
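If curl isn’t handy, the same probe can be sketched in pure Python: spin up a throwaway backend whose slow route sleeps past the client’s timeout, then check that a fast route still answers. The port, routes, and timings below are arbitrary demo values, not anything NGINX-specific.

```python
import socket
import threading
import time
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        try:
            if self.path == "/slow":
                time.sleep(3)  # simulate slow processing
            self.send_response(200)
            self.send_header("Content-Length", "2")
            self.end_headers()
            self.wfile.write(b"ok")
        except OSError:
            pass  # client already gave up and hung up

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 8098), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

def probe(path, timeout_s):
    # A client-side read timeout plays the role of the proxy's deadline.
    url = f"http://127.0.0.1:8098{path}"
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            return resp.read().decode()
    except (socket.timeout, urllib.error.URLError):
        return "timed out"

fast = probe("/fast", timeout_s=1.0)
slow = probe("/slow", timeout_s=1.0)
print(fast, slow)
server.shutdown()
```

The fast route answers ok; the slow one hits the 1-second deadline, just like a curl against a too-short proxy timeout.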
HAProxy Version
If you’re using HAProxy instead of NGINX, note that it draws the lines differently: timeout client lives on the frontend (or in defaults), while timeout connect and timeout server can be set per-backend:

```
frontend http
    mode http
    bind *:80
    # Must cover your slowest upload -- it can't be set per-backend
    timeout client 600s
    use_backend upload if { path_beg /upload }
    default_backend api

backend api
    mode http
    timeout connect 5s
    timeout server 60s
    server app1 backend:8080

backend upload
    mode http
    timeout connect 5s
    timeout server 600s
    server app1 backend:8080
```

Same concept — the upload backend gets longer server-side timeouts, and the frontend’s client timeout has to be generous enough for the slowest case.
Caddy Version
Caddy splits this differently again: timeouts for reading the request body live in the global options block, and per-proxy timeouts go on the reverse_proxy transport. Roughly — treat the exact subdirective names as a sketch and check the docs for your Caddy version:

```
{
    servers {
        timeouts {
            read_body 600s
        }
    }
}

example.com {
    reverse_proxy /upload* localhost:8080 {
        transport http {
            response_header_timeout 600s
        }
    }

    reverse_proxy /api* localhost:8080 {
        transport http {
            response_header_timeout 30s
        }
    }
}
```

The Real Test
Wait a week. If you haven’t gotten a complaint about upload timeouts, your config is probably ok.
But users will find the one weird edge case: they’re on hotel WiFi that tops out around 860 KB/s, they’re uploading 100MB, and the transfer takes exactly 119 seconds.
Then you get to debug it at 2 AM.
Bottom line: reverse proxy timeouts are invisible until they kill your uploads. Set them based on your worst-case scenario, not your best-case. And make sure your backend app has its own timeouts too — don’t rely solely on the proxy.
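One way to sketch that backend-side limit is a thread-pool future used as a deadline. The names and numbers here are illustrative, not from any particular framework — a real server would hook this into its request handling.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def handle_upload(duration_s):
    # Stand-in for reading and processing an uploaded body.
    time.sleep(duration_s)
    return "ok"

BACKEND_TIMEOUT_S = 1.0  # assumed budget; align it with your proxy timeouts

pool = ThreadPoolExecutor(max_workers=2)

# A quick request finishes well inside the deadline.
fast_result = pool.submit(handle_upload, 0.1).result(timeout=BACKEND_TIMEOUT_S)

# A stuck one gets cut off by the backend itself, proxy or no proxy.
slow = pool.submit(handle_upload, 5.0)
try:
    slow.result(timeout=BACKEND_TIMEOUT_S)
    outcome = "ok"
except FutureTimeout:
    outcome = "timed out"
    # Note: the worker thread keeps running; a real server also has to
    # abort the underlying work, not just stop waiting for it.

print(fast_result, outcome)
pool.shutdown(wait=False)
```

That way, even if the proxy config drifts, a wedged request can’t hold a backend worker forever.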