# Fix Nginx Worker Process Open File Descriptor Limit Exceeded

Under heavy load, Nginx workers start failing with mysterious errors:

```
2026/04/08 17:30:22 [crit] 6789#6789: *123456 open() "/var/www/app/static/css/main.css" failed (24: Too many open files), client: 203.0.113.99
```

Error code 24 is `EMFILE`: the process has reached its maximum number of open file descriptors. Each open file, socket connection, and log file consumes one file descriptor, and the default Linux limit is typically 1024 per process.
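A quick way to see that limit in your own shell (child processes inherit it at fork/exec):

```shell
# Soft nofile limit for the current shell
ulimit -n

# The kernel's view of the same limit for any process
grep "Max open files" /proc/self/limits
```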

## Calculating File Descriptor Usage

An Nginx worker uses file descriptors for:

- Log files (access.log, error.log): 2
- Listening sockets (port 80, port 443): 2
- Each client connection: 1
- Each upstream connection: 1
- Each static file being served: 1

Under load with 1000 concurrent clients and 100 upstream connections, a single worker needs approximately 1104 file descriptors. The default limit of 1024 is insufficient.
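That estimate is just arithmetic over the counts above; a sketch with the assumed workload figures:

```shell
# Per-worker fd estimate (workload figures are assumptions from the text)
clients=1000      # concurrent client connections
upstreams=100     # concurrent upstream connections
logs=2            # access.log + error.log
listeners=2       # ports 80 and 443
echo $(( clients + upstreams + logs + listeners ))   # prints 1104
```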

## Diagnosing Current Usage

Check the current limits:

```bash
cat /run/nginx.pid
cat /proc/$(cat /run/nginx.pid)/limits | grep "Max open files"
ls /proc/$(cat /run/nginx.pid)/fd | wc -l

for pid in $(pgrep -f "nginx: worker"); do
  echo "Worker $pid: $(ls /proc/$pid/fd 2>/dev/null | wc -l) open files"
done
```

## Setting worker_rlimit_nofile

In `/etc/nginx/nginx.conf`, set the limit before the `events` block:

```nginx
worker_rlimit_nofile 65535;

events {
    worker_connections 4096;
    multi_accept on;
}
```

The rule of thumb: `worker_rlimit_nofile` should be at least `worker_connections * 2` (one descriptor for the client plus one for the upstream connection) plus overhead for log files and listening sockets.
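Applying that rule to the `worker_connections 4096` example above (the overhead figure is an arbitrary safety margin, not an Nginx constant):

```shell
worker_connections=4096
overhead=64    # logs, listening sockets, plus margin (assumption)
echo $(( worker_connections * 2 + overhead ))   # prints 8256
```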

## System-Level Limits

### For systemd

```bash
sudo systemctl edit nginx
```

Add:

```ini
[Service]
LimitNOFILE=65535
```

Then apply the override with `sudo systemctl daemon-reload && sudo systemctl restart nginx`.

### For limits.conf

```bash
sudo tee -a /etc/security/limits.conf << 'EOF'
nginx soft nofile 65535
nginx hard nofile 65535
EOF
```

Note that `limits.conf` only applies to PAM login sessions; when Nginx runs as a systemd service, the `LimitNOFILE` override above is what takes effect.

## Verify the Effective Limit

After restart:

```bash
cat /proc/$(cat /run/nginx.pid)/limits | grep "Max open files"
# Should show: Max open files  65535  65535  files
```

## System-Wide File Descriptor Limit

The system also has a global limit:

```bash
cat /proc/sys/fs/file-max
cat /proc/sys/fs/file-nr
```

If the first field of `file-nr` (currently allocated descriptors) approaches `file-max`, increase it:

```bash
echo 1000000 | sudo tee /proc/sys/fs/file-max
```

Make it persistent in `/etc/sysctl.conf`:

```ini
fs.file-max = 1000000
```

Apply with `sudo sysctl -p`.
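A related kernel knob worth checking: a process's nofile limit cannot be raised above `fs.nr_open` (1048576 by default), so very large `LimitNOFILE` values may require raising it as well:

```shell
# Upper bound for any process's nofile limit
cat /proc/sys/fs/nr_open
```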

## Finding Leaked File Descriptors

If file descriptor counts grow unexpectedly, there may be a leak. Inspect what descriptors are open:

```bash
ls -la /proc/$(pgrep -f "nginx: worker" | head -1)/fd/
```

If you see thousands of sockets in the CLOSE_WAIT state, the remote end has closed the connection but Nginx has not yet closed its side, which often points to stale upstream keepalive connections:

```bash
ss -tnp | grep nginx | grep CLOSE_WAIT | wc -l
```
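If `ss` is unavailable, a system-wide count (not filtered to Nginx) can be pulled straight from `/proc/net/tcp`, where CLOSE_WAIT is connection state `08`:

```shell
# Column 4 of /proc/net/tcp is the hex connection state; 08 = CLOSE_WAIT
awk 'NR > 1 && $4 == "08"' /proc/net/tcp | wc -l
```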

Fix by adjusting upstream keepalive settings:

```nginx
upstream backend {
    server 127.0.0.1:8080;
    keepalive 16;
    keepalive_timeout 60s;
    keepalive_requests 100;
}
```

With `keepalive_requests 100`, Nginx closes and reopens each keepalive connection after 100 requests, preventing stale connections from accumulating.
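Note that upstream keepalive only takes effect for proxied HTTP traffic when the connection uses HTTP/1.1 and the `Connection` header is cleared in the corresponding `location` or `server` block:

```nginx
location / {
    proxy_pass http://backend;
    proxy_http_version 1.1;          # keepalive requires HTTP/1.1
    proxy_set_header Connection "";  # drop the default "Connection: close"
}
```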