You’ve probably seen that cryptic error before: “Too many open files.” It shows up at 3 AM, your app stops accepting connections, and you have no idea what’s happening. Welcome to the world of file descriptors.
Here’s the thing: Linux treats everything as a file. Sockets. Regular files. Pipes. Devices. All of it. Every one of these gets a number: a file descriptor (fd). And by default, each process is limited to 1024 of them. That limit made sense in 1990, but it’ll wreck you if you’re running a web server handling thousands of concurrent connections.
What Are File Descriptors, Really?
A file descriptor is just an index into a process’s file descriptor table. When you open a file, Linux assigns it a number (usually the next available one, starting from 0). Your process uses that number to refer to the file later.
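You can watch this numbering happen from a shell. A quick sketch (Linux-only, since it reads `/proc`; the path `/tmp/fd-demo.txt` is just an example):

```shell
# Open /tmp/fd-demo.txt for writing; the shell picks descriptor 3,
# the first number free after stdin (0), stdout (1), and stderr (2).
exec 3> /tmp/fd-demo.txt

# The kernel exposes the mapping as a symlink under /proc.
readlink /proc/$$/fd/3        # -> /tmp/fd-demo.txt

echo "written via fd 3" >&3   # write through the descriptor
exec 3>&-                     # close it, freeing the number for reuse
cat /tmp/fd-demo.txt
```

Close fd 3 and the next `open()` gets number 3 again; the kernel always hands out the lowest free slot.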
```
$ ls -la /proc/$$/fd | head -n 10
total 0
dr-x------ 2 user user  0 Jan 17 10:45 .
dr-xr-xr-x 9 user user  0 Jan 17 10:45 ..
lrwx------ 1 user user 64 Jan 17 10:45 0 -> /dev/pts/1
lrwx------ 1 user user 64 Jan 17 10:45 1 -> /dev/pts/1
lrwx------ 1 user user 64 Jan 17 10:45 2 -> /dev/pts/1
lrwx------ 1 user user 64 Jan 17 10:45 3 -> /tmp (deleted)
```

See those first three? 0 is stdin, 1 is stdout, 2 is stderr. Every process gets those by default. Everything else you explicitly open takes the next available number.
The Default Limits (Soft and Hard)
There are actually two limits: soft and hard.
- Soft limit: What the process can actually hit right now. Defaults to 1024.
- Hard limit: The ceiling. Can’t raise the soft limit above this without root. Defaults to 65536 (or higher on modern systems).
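Both limits are themselves capped by kernel-wide sysctls: `fs.nr_open` bounds how high any process’s hard limit can go, and `fs.file-max` bounds the total number of open files across the entire system. A quick way to inspect them (values vary by distro):

```shell
# Per-process ceiling for the hard limit (commonly 1048576)
cat /proc/sys/fs/nr_open

# System-wide ceiling across all processes
cat /proc/sys/fs/file-max

# Current system-wide usage: allocated, unused, max
cat /proc/sys/fs/file-nr
```

If you ever exhaust `fs.file-max`, the whole box (not just one process) starts refusing to open files, so it’s worth a glance when debugging.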
Check your current limits:
```
$ ulimit -n
1024
$ ulimit -Hn
65536
```

That `-n` flag means “open files.” (Yes, the naming is terrible.)
How to Raise Them (The Right Way)
There are three places to change this, depending on what you’re doing:
1. For Your Current Bash Session (Temporary)
```
ulimit -n 65536
```

This only affects your current shell and its children. Restart, and you’re back to 1024.
2. For a Specific User (Semi-Permanent)
Edit /etc/security/limits.conf:
```
# Add at the bottom
username soft nofile 65536
username hard nofile 65536

# Or for all users in a group
@groupname soft nofile 65536
@groupname hard nofile 65536
```

This persists across logins. You need to log out and back in for it to take effect. Note that it only applies to sessions that go through PAM (via pam_limits) — systemd services ignore this file entirely.
```
# Check it worked
$ su - username
$ ulimit -n
65536
```

3. For a Systemd Service (The Best Way)
If you’re running an app as a systemd service, set the limit in the service file:
```
[Service]
LimitNOFILE=65536
LimitNPROC=32768
Restart=always
```

Reload and restart:

```
sudo systemctl daemon-reload
sudo systemctl restart myapp
```

This is the cleanest approach because it’s explicit, versioned, and doesn’t affect other processes.
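If you’d rather not edit the unit file directly, systemd also reads drop-in overrides. A hypothetical `/etc/systemd/system/myapp.service.d/override.conf` (`myapp` is a placeholder service name):

```
[Service]
LimitNOFILE=65536
```

Run `sudo systemctl edit myapp` to create and open that file, then `daemon-reload` and restart as above. You can confirm the effective value with `systemctl show myapp -p LimitNOFILE`.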
Diagnosing the Problem
Before you raise limits, figure out why you’re hitting them. A few tools help:
Check current usage:
```
$ grep "Max open files" /proc/$(pgrep myapp)/limits
Max open files            1024                 65536                files
```
```
$ ls /proc/$(pgrep myapp)/fd | wc -l
943
```

That second command counts how many fds the process currently has open. (It’s `ls`, not `cat` — `/proc/<pid>/fd` is a directory of symlinks, one per descriptor.)
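Those two checks combine nicely into a small script that warns when a process gets close to its soft limit. A sketch — the 80% threshold is an arbitrary example:

```shell
#!/bin/sh
# Usage: fd-check.sh <pid>   (defaults to the current shell)
pid=${1:-$$}

# Count open descriptors and read the soft limit from /proc.
used=$(ls "/proc/$pid/fd" | wc -l)
limit=$(awk '/^Max open files/ {print $4}' "/proc/$pid/limits")

pct=$((used * 100 / limit))
echo "pid $pid: $used/$limit fds ($pct%)"
if [ "$pct" -ge 80 ]; then
    echo "WARNING: close to the fd limit" >&2
fi
```

Drop something like this into a cron job or health check and you’ll hear about fd pressure before the 3 AM page.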
Find which process is the culprit:
```
$ lsof -p $(pgrep myapp) | wc -l
943
```

Or see what files it has open:
```
$ lsof -p $(pgrep myapp) | head -n 20
```

Watch it in real-time:
```
$ watch -n 1 "ls /proc/\$(pgrep myapp)/fd | wc -l"
```

Why Did I Hit This?
Common culprits:
- Web server handling thousands of concurrent connections. Nginx, Apache, Node.js — each connection burns an fd.
- Database connections. Connection pools that don’t close cleanly. Every open db connection is an fd.
- File operations not closed. Developer leaves files open in a loop. Burns one fd per iteration.
- Unix domain sockets. If your app creates a socket per client, you’ll exhaust fds fast.
- Logging to disk. Each file handle is an fd. Thousands of log files = lots of fds.
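To figure out which of these is eating your descriptors, classify a process’s open fds by what they point at. One way to sketch it straight from `/proc` (no lsof needed; `myapp` is a placeholder — a pile of `socket` entries suggests leaked connections, a pile of `file` entries suggests unclosed files or log handles):

```shell
#!/bin/sh
pid=$(pgrep myapp)

# Resolve each fd symlink, collapse it to a category, and count.
for fd in /proc/"$pid"/fd/*; do
    readlink "$fd"
done | sed -e 's/^socket:.*/socket/' \
          -e 's/^pipe:.*/pipe/' \
          -e 's/^anon_inode:.*/anon_inode/' \
          -e 's#^/.*#file#' \
     | sort | uniq -c | sort -rn
```

(`file` here covers regular files and devices alike; split the last `sed` pattern further if you need that distinction.)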
The Numbers
1024 was fine when servers had 128 MB of RAM and handled 50 concurrent users. Today? Useless.
Modern systems let you safely raise it to 65536 (or higher). At that point, you’re no longer fd-limited; you hit memory limits first. For most servers, 65536 is more than enough. For massive scale (100k+ concurrent connections), you’ll need smarter architecture anyway (connection pooling, load balancing, etc.).
Key Takeaway
File descriptor exhaustion is a silent killer because it doesn’t fail loudly — your app just stops accepting new connections. Check your limits, raise them to match your workload, and use systemd’s LimitNOFILE to make it declarative. Your future self at 3 AM will thank you.