Your high-traffic site runs fine until traffic spikes. Suddenly Nginx starts returning 500 errors and the logs fill with "too many open files" messages. The server has plenty of RAM and CPU, yet Nginx can't accept more connections. This is a file descriptor limit issue, and it's fixable with proper configuration.
Understanding the Error
The error appears in /var/log/nginx/error.log:
```
2026/04/04 13:00:00 [crit] 1234#1234: *5678 open() "/var/www/html/index.html" failed (24: Too many open files)
2026/04/04 13:00:01 [alert] 1234#1234: *5679 socket() failed (24: Too many open files) while connecting to upstream
```

Every open connection, log file, and static file requires a file descriptor. When Nginx hits the limit, it can't accept new connections or serve files.
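If you want to see errno 24 firsthand, a throwaway bash process can reproduce it by shrinking its own soft limit and opening descriptors until `open()` fails. This is a sketch; the exact fd count in the output will vary by environment.

```bash
# Reproduce "Too many open files" (errno 24) safely: lower the soft
# limit in a disposable bash process, then open descriptors until
# open() fails, just as an overloaded nginx worker does.
bash -c '
  ulimit -Sn 16        # tiny soft limit, affects this process only
  opened=0
  for i in {1..32}; do
    if exec {fd}</dev/null 2>/dev/null; then
      opened=$((opened + 1))
    else
      echo "open() failed after $opened extra fds"
      break
    fi
  done
'
```

Because the limit is lowered inside `bash -c`, your shell and the rest of the system are untouched.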
Check current limits:
```bash
# Check Nginx worker process limits
cat /proc/$(pgrep -f "nginx: worker" | head -1)/limits | grep "open files"

# Or using prlimit
prlimit --pid=$(pgrep -f "nginx: worker" | head -1) | grep NOFILE
```
Output showing the problem:
```
Max open files            1024                 1024                 files
```
Step 1: Check System-Wide Limits
Before adjusting Nginx, check the system limits:
```bash
# Current shell limit (soft)
ulimit -n

# Hard limit
ulimit -Hn

# Soft limit (same as ulimit -n)
ulimit -Sn

# System-wide maximum
cat /proc/sys/fs/file-max
```
Typical default output:
```
1024       # soft limit (ulimit -n / -Sn)
65535      # hard limit
2097152    # system max
```
The soft limit (1024) is what's causing the problem. Nginx can't exceed it without explicit configuration.
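The split between soft and hard matters: an unprivileged process can lower its soft limit freely and raise it again, but never above the hard limit. A quick sketch, run in a subshell so your interactive shell's limits stay untouched:

```bash
# Soft vs. hard limits: a process may lower its soft limit and raise
# it back, but only up to the hard limit. The subshell isolates the
# change from the parent shell.
(
  ulimit -Sn 512
  echo "soft lowered to: $(ulimit -Sn)"
  ulimit -Sn "$(ulimit -Hn)"      # raise back up to the hard limit
  echo "soft raised to:  $(ulimit -Sn) (hard: $(ulimit -Hn))"
)
```

Raising the hard limit itself requires root privileges, which is exactly what the systemd and limits.conf steps below provide for the Nginx service.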
Step 2: Configure Nginx worker_rlimit_nofile
The primary fix is in nginx.conf:
```nginx
# /etc/nginx/nginx.conf

worker_processes auto;
worker_rlimit_nofile 65535;  # Add this line

events {
    worker_connections 65535;
    use epoll;
    multi_accept on;
}

http {
    # ... rest of config
}
```
The worker_rlimit_nofile directive sets the maximum number of open file descriptors for each worker process, overriding the soft limit the process inherited. Set it to at least your worker_connections value; when proxying, it needs roughly double that.
Important relationship:

```
worker_rlimit_nofile >= worker_connections * 2
```

This accounts for both client connections and upstream/backend connections.
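That relationship is easy to sanity-check in a script. The two values below are hypothetical examples; in practice you would pull them from `nginx -T` output.

```bash
# Sanity-check the directive pair from nginx.conf.
# These values are hypothetical, not read from a live config.
worker_connections=30000
worker_rlimit_nofile=65535

required=$(( worker_connections * 2 ))
if [ "$worker_rlimit_nofile" -ge "$required" ]; then
  echo "OK: $worker_rlimit_nofile >= $required"
else
  echo "Too low: raise worker_rlimit_nofile to at least $required"
fi
# → OK: 65535 >= 60000
```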
Apply changes:
```bash
nginx -t && systemctl reload nginx
```

Step 3: Increase System Limits via systemd
On modern systems using systemd, Nginx limits are controlled by systemd, not shell limits. Edit the service override:
```bash
# Create an override interactively
systemctl edit nginx
```

Add:

```
[Service]
# Sets both the soft and hard limits for the service
LimitNOFILE=65535
```

Or create the override file directly:
```bash
mkdir -p /etc/systemd/system/nginx.service.d/
cat > /etc/systemd/system/nginx.service.d/limits.conf << 'EOF'
[Service]
LimitNOFILE=65535
EOF
```

Apply:
```bash
systemctl daemon-reload
systemctl restart nginx
```

Verify:
```bash
# Check systemd-applied limits
systemctl show nginx | grep LimitNOFILE

# Check running process limits
cat /proc/$(pgrep -f "nginx: worker" | head -1)/limits | grep "open files"
```
Step 4: Configure System-Wide Limits
For non-systemd systems or to set global defaults, edit /etc/security/limits.conf:
```
# Add to /etc/security/limits.conf
nginx soft nofile 65535
nginx hard nofile 65535
* soft nofile 65535
* hard nofile 65535
```

To keep the change in a dedicated drop-in file instead, use /etc/security/limits.d/:
```bash
# Create a dedicated file
cat > /etc/security/limits.d/nginx.conf << 'EOF'
nginx soft nofile 65535
nginx hard nofile 65535
EOF
```

Note: these limits apply to new login sessions only; re-login or restart the service for them to take effect.
Step 5: Check PAM Configuration
On some systems, PAM limits may override your settings. Check:
```bash
# Ensure PAM session limits are enabled
grep -r "pam_limits" /etc/pam.d/
```

You should see something like:
```
/etc/pam.d/common-session:session required pam_limits.so
/etc/pam.d/sshd:session required pam_limits.so
```
If missing, add:

```bash
echo "session required pam_limits.so" >> /etc/pam.d/common-session
```

Step 6: Verify PAM for systemd Services
Systemd services don't use PAM by default. For systemd-managed Nginx:
```bash
# Check whether PAM is enabled for the service
systemctl show nginx --property=PAMName

# If the value is empty, PAM (and limits.conf) isn't being used
```
This is why the systemd LimitNOFILE approach (Step 3) is more reliable for modern systems.
Step 7: Calculate Appropriate Limits
Don't just set limits arbitrarily high. Calculate based on expected load:
```bash
# Rough formula for worker_rlimit_nofile:
#   worker_connections * 2 (client + upstream when proxying)
#   + log files + open static/cache files

# Example inputs:
# - 10,000 concurrent connections
# - proxying to an upstream (doubles descriptors per request)
# - 2 log files per server block
# - cache files (varies)

# Conservative estimate
worker_connections=10000
echo $(( worker_connections * 2 + 1000 ))   # 21000
```
Also check that the system can handle it:

```bash
# Check current system-wide file descriptor usage
cat /proc/sys/fs/file-nr

# Output columns: allocated  allocated-but-unused  max
# Example:        2048       512                   2097152

# If allocated approaches max, the whole system is near its limit
```
Increase system-wide maximum if needed:
```bash
# Temporary (resets on reboot)
echo 4194302 > /proc/sys/fs/file-max

# Permanent: add to /etc/sysctl.conf
echo "fs.file-max = 4194302" >> /etc/sysctl.conf
sysctl -p
```
Step 8: Monitor File Descriptor Usage
Set up monitoring to catch issues before they happen:
```bash
# Count open fds for one Nginx worker
ls /proc/$(pgrep -f "nginx: worker" | head -1)/fd | wc -l

# Watch in real time
watch -n 1 'ls /proc/$(pgrep -f "nginx: worker" | head -1)/fd 2>/dev/null | wc -l'

# System-wide usage
cat /proc/sys/fs/file-nr

# Per-process usage (requires lsof)
lsof -p $(pgrep -f "nginx: worker" | head -1) | wc -l
```
Step 9: Check for File Descriptor Leaks
If you've increased limits but still run out, you might have a leak:
```bash
# Monitor open files over time
for i in {1..10}; do
  echo "Check $i:"
  lsof -p $(pgrep -f "nginx: worker" | head -1) | wc -l
  sleep 60
done

# If the count climbs steadily, you have a leak
```
Common causes:

- Keepalive connections not closing properly
- Upstream connections not being released
- Log files not being rotated
- Cache files accumulating
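To see which of these is growing, group a process's descriptors by what they point at. A sketch follows; the `fd_breakdown` helper is hypothetical (not an existing tool), and it's demonstrated on the current shell, though in practice you'd pass it a worker PID.

```bash
# fd_breakdown: count a process's open descriptors by target type,
# to see whether sockets, pipes, or regular files are accumulating.
# Hypothetical helper; demonstrated here on the current shell ($$).
fd_breakdown() {
  local pid=$1
  for fd in /proc/"$pid"/fd/*; do
    readlink "$fd"
  done | awk '
    /^socket:/ { s++; next }
    /^pipe:/   { p++; next }
               { f++ }
    END { printf "sockets=%d pipes=%d files=%d\n", s, p, f }'
}

fd_breakdown $$
# On an nginx worker:
# fd_breakdown "$(pgrep -f "nginx: worker" | head -1)"
```

A worker whose socket count climbs while file count stays flat points at connection handling, not log rotation.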
Check connections:
```bash
# Active Nginx connections
ss -tnp | grep nginx | wc -l

# Connections in various states (local ports 80/443)
ss -tnp state established '( sport = :443 or sport = :80 )' | grep nginx | wc -l
ss -tnp state time-wait | wc -l
```
Complete Configuration Example
A high-traffic Nginx configuration:
```nginx
# /etc/nginx/nginx.conf

worker_processes auto;
worker_rlimit_nofile 100000;

events {
    worker_connections 40000;
    use epoll;
    multi_accept on;
}

http {
    # Connection optimizations
    keepalive_timeout 30;
    keepalive_requests 1000;
    reset_timedout_connection on;

    # Upstream keepalive
    upstream backend {
        server 127.0.0.1:3000;
        keepalive 128;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
}
```
Systemd override:
```
# /etc/systemd/system/nginx.service.d/limits.conf
[Service]
LimitNOFILE=100000
```

System-wide settings:
```
# /etc/sysctl.conf
fs.file-max = 4194302
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535
```

Apply all:
```bash
sysctl -p
systemctl daemon-reload
systemctl restart nginx
```

Verification Checklist
After making changes:
```bash
# 1. Verify Nginx config syntax
nginx -t

# 2. Check systemd limits
systemctl show nginx | grep LimitNOFILE

# 3. Verify process limits
cat /proc/$(pgrep -f "nginx: worker" | head -1)/limits | grep "open files"

# 4. Test under load
ab -n 10000 -c 1000 http://localhost/

# 5. Monitor during the test
watch -n 1 'cat /proc/sys/fs/file-nr; lsof -p $(pgrep -f "nginx: worker" | head -1) 2>/dev/null | wc -l'
```
File descriptor limits are one of the most common scaling issues for high-traffic Nginx servers. Setting them correctly upfront prevents outages during traffic spikes.