
Linux File Descriptor Limits: When 1024 Isn't Enough

By SumGuy · 5 min read

You’ve probably seen that cryptic error before: “Too many open files.” It shows up at 3 AM, your app stops accepting connections, and you have no idea what’s happening. Welcome to the world of file descriptors.

Here’s the thing: Linux treats everything as a file. Sockets. Network connections. Regular files. Pipes. Devices. All of it. Every one of these gets a number: a file descriptor (fd). And by default, each process is allowed at most 1024 of them. That limit exists for reasons that made sense in 1990, but it’ll wreck you if you’re running a web server handling thousands of concurrent connections.

What Are File Descriptors, Really?

A file descriptor is just an index into a process’s file descriptor table. When you open a file, Linux assigns it a number (the lowest available one, starting from 0). Your process uses that number to refer to the file later.

Terminal window
$ ls -la /proc/$$/fd | head -n 10
total 0
dr-x------ 2 user user 0 Jan 17 10:45 .
dr-xr-xr-x 9 user user 0 Jan 17 10:45 ..
lrwx------ 1 user user 64 Jan 17 10:45 0 -> /dev/pts/1
lrwx------ 1 user user 64 Jan 17 10:45 1 -> /dev/pts/1
lrwx------ 1 user user 64 Jan 17 10:45 2 -> /dev/pts/1
lrwx------ 1 user user 64 Jan 17 10:45 3 -> /tmp (deleted)

See those first three? 0 is stdin, 1 is stdout, 2 is stderr. Every process gets those by default. Everything else you explicitly open takes the next available number.
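
You can watch that numbering happen from your own shell. A minimal sketch, assuming bash (`exec 3<` opens a file on fd 3; `exec 3<&-` closes it):

Terminal window
$ exec 3< /etc/hostname    # open a file; fd 3 is the next free number after 0, 1, 2
$ readlink /proc/$$/fd/3
/etc/hostname
$ exec 3<&-                # close fd 3 again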

The Default Limits (Soft and Hard)

There are actually two limits: soft and hard. The soft limit is the one that’s enforced; the hard limit is a ceiling. An unprivileged process can raise its own soft limit up to the hard limit, but only root can raise the hard limit itself.

Check your current limits:

Terminal window
$ ulimit -n
1024
$ ulimit -Hn
65536

That -n flag means “open files.” (Yes, the naming is terrible.)
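
Here’s the soft/hard distinction in action. A quick demonstration, assuming bash and the limits shown above:

Terminal window
$ ulimit -Sn 4096      # raise only the soft limit; plain `ulimit -n N` sets both soft and hard
$ ulimit -n
4096
$ ulimit -n 999999     # past the hard limit of 65536: not without root
bash: ulimit: open files: cannot modify limit: Operation not permitted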

How to Raise Them (The Right Way)

There are three places to change this, depending on what you’re doing:

1. For Your Current Bash Session (Temporary)

Terminal window
ulimit -n 65536

This only affects your current shell and its children. Start a new shell, and you’re back to 1024.
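
You can confirm that children inherit it by asking a child shell (a quick check, assuming bash):

Terminal window
$ ulimit -n 65536
$ bash -c 'ulimit -n'   # the child process inherits the raised limit
65536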

2. For a Specific User (Semi-Permanent)

Edit /etc/security/limits.conf:

/etc/security/limits.conf
# Add at the bottom
username soft nofile 65536
username hard nofile 65536
# Or for all users in a group
@groupname soft nofile 65536
@groupname hard nofile 65536

This persists across logins; it’s applied by PAM (pam_limits), so you need to log out and back in for it to take effect. It only covers login sessions, though: systemd services don’t read limits.conf, which is why the next section exists.

Terminal window
# Check it worked
$ su - username
$ ulimit -n
65536

3. For a Systemd Service (The Best Way)

If you’re running an app as a systemd service, set the limit in the service file:

/etc/systemd/system/myapp.service
[Service]
LimitNOFILE=65536
LimitNPROC=32768
Restart=always

Reload and restart:

Terminal window
sudo systemctl daemon-reload
sudo systemctl restart myapp

This is the cleanest approach because it’s explicit, versioned, and doesn’t affect other processes.
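
It’s worth verifying the limit actually took effect on the running process; a quick check (myapp as above):

Terminal window
$ systemctl show myapp -p LimitNOFILE
LimitNOFILE=65536
$ grep "Max open files" /proc/$(systemctl show myapp -p MainPID --value)/limits
Max open files            65536                65536                files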

Diagnosing the Problem

Before you raise limits, figure out why you’re hitting them. A few tools help:

Check current usage:

Terminal window
$ grep -E "^Limit|open files" /proc/$(pgrep myapp)/limits
Limit                     Soft Limit           Hard Limit           Units
Max open files            1024                 65536                files
$ ls /proc/$(pgrep myapp)/fd | wc -l
943

That second command counts how many fds the process currently has open.

Confirm your suspect with lsof:

Terminal window
$ lsof -p $(pgrep myapp) | wc -l
943

Or see what files it has open:

Terminal window
$ lsof -p $(pgrep myapp) | head -n 20
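
A breakdown by descriptor type often points at the leak faster than the raw list. A rough one-liner (TYPE is lsof’s fifth column; the counts here are illustrative):

Terminal window
$ lsof -p $(pgrep myapp) | awk 'NR > 1 {print $5}' | sort | uniq -c | sort -rn
    612 IPv4
    214 REG
     98 unix
     19 CHR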

Watch it in real-time:

Terminal window
$ watch -n 1 "ls /proc/\$(pgrep myapp)/fd | wc -l"
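
And if you don’t know which process is the culprit in the first place, rank everything by fd count. A rough sketch over procfs (run as root to see other users’ processes; output illustrative):

Terminal window
$ for p in /proc/[0-9]*; do
    echo "$(ls "$p/fd" 2>/dev/null | wc -l) $(cat "$p/comm" 2>/dev/null)"
  done | sort -rn | head -n 3
943 myapp
112 dockerd
87 systemd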

Why Did I Hit This?

Common culprits:

  1. Web server handling thousands of concurrent connections. Nginx, Apache, Node.js — each connection burns an fd.
  2. Database connections. Connection pools that don’t close cleanly. Every open db connection is an fd.
  3. File operations not closed. Developer leaves files open in a loop. Burns one fd per iteration (see the toy leak sketched after this list).
  4. Unix domain sockets. If your app creates a socket per client, you’ll exhaust fds fast.
  5. Logging to disk. Each file handle is an fd. Thousands of log files = lots of fds.
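
Culprit #3 is the easiest to reproduce. A toy leak, assuming bash (`exec {fd}<` opens the file on a fresh descriptor, starting at 10, and nothing ever closes it):

Terminal window
$ for i in 1 2 3; do exec {fd}< /etc/hostname; echo "leaked fd $fd"; done
leaked fd 10
leaked fd 11
leaked fd 12
$ ls /proc/$$/fd | wc -l   # the leaked fds linger in the shell's fd table
7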

The Numbers

1024 was fine when servers had 128 MB of RAM and handled 50 concurrent users. Today? Useless.

Modern systems let you safely raise it to 65536 (or higher). At that point, you’re no longer fd-limited; you hit memory limits first. For most servers, 65536 is more than enough. For massive scale (100k+ concurrent conns), you’ll need smarter architectures anyway (connection pooling, load balancing, etc.).

Key Takeaway

File descriptor exhaustion is a silent killer because it doesn’t fail loudly — your app just stops accepting new connections. Check your limits, raise them to match your workload, and use systemd’s LimitNOFILE to make it declarative. Your future self at 3 AM will thank you.

