During a traffic spike, your Nginx server starts returning 502 errors. The logs show "worker connections are not enough" or you're seeing connection drops. Your server has plenty of CPU and RAM, but Nginx just can't handle the concurrent connections. The issue is likely the worker_connections limit.

Let's understand and fix this configuration issue.

Understanding Worker Connections

Each Nginx worker process has a limit on concurrent connections. The error typically appears in the logs:

```bash
2026/04/04 15:00:00 [alert] 1234#1234: *5678 1024 worker_connections are not enough while connecting to upstream
```

Or:

```bash
2026/04/04 15:00:00 [warn] 1234#1234: accept() failed (24: Too many open files)
```

The total concurrent connections Nginx can handle is calculated as:

```
Total connections = worker_processes x worker_connections
```

With the defaults (`worker_processes auto;` and `worker_connections 1024;`), a 4-core system tops out at roughly 4 x 1024 = 4096 concurrent connections.
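The capacity arithmetic is worth scripting so you can plug in your own numbers. This sketch hardcodes the 4-core default example from above; substitute your real worker and connection counts:

```shell
#!/bin/sh
# Total capacity = worker_processes * worker_connections.
# Example values mirror the 4-core defaults discussed above.
WORKER_PROCESSES=4        # what "auto" resolves to on a 4-core box
WORKER_CONNECTIONS=1024   # Nginx default

TOTAL=$((WORKER_PROCESSES * WORKER_CONNECTIONS))
echo "Total concurrent connections: $TOTAL"
```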

Step 1: Check Current Configuration

View your current settings:

```bash
# Check nginx.conf
grep -E "worker_processes|worker_connections|worker_rlimit_nofile" /etc/nginx/nginx.conf

# Full events block
grep -A 10 "^events" /etc/nginx/nginx.conf
```

Default configuration typically looks like:

```nginx
worker_processes auto;

events {
    worker_connections 1024;
}
```

Check what Nginx is actually using:

```bash
# Show the setting Nginx is actually running with
nginx -T 2>&1 | grep worker_connections

# Check current process limits
cat /proc/$(pgrep -f "nginx: worker" | head -1)/limits | grep "open files"
```

Step 2: Calculate Your Requirements

First, understand your actual connection usage:

```bash
# Current file descriptors per worker
for pid in $(pgrep -f "nginx: worker"); do
    echo "Worker $pid: $(ls /proc/$pid/fd | wc -l) file descriptors"
done

# Total established connections
ss -tnp | grep nginx | wc -l

# Connections by state (-a includes non-established states)
ss -tanp | grep nginx | awk '{print $1}' | sort | uniq -c

# Real-time monitoring
watch -n 1 'ss -tnp | grep nginx | wc -l'
```

Calculate required connections:

```
Required connections = Peak concurrent users x Connections per user x Safety factor

Example:
- 10,000 concurrent users
- 2 connections per user (HTTP/1.1 with keepalive + maybe WebSocket)
- 1.5 safety factor

Required = 10,000 x 2 x 1.5 = 30,000 connections
```
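Shell integer arithmetic cannot express the 1.5 safety factor directly, so a small awk one-liner (using the example numbers above) does the math:

```shell
#!/bin/sh
# Required = users * connections-per-user * safety factor.
USERS=10000
PER_USER=2
# awk handles the fractional safety factor that $(( )) cannot.
REQUIRED=$(awk -v u="$USERS" -v c="$PER_USER" 'BEGIN { printf "%d", u * c * 1.5 }')
echo "Required connections: $REQUIRED"
```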

Step 3: Increase worker_connections

Update your Nginx configuration:

```nginx
# /etc/nginx/nginx.conf

worker_processes auto;        # One worker per CPU core
worker_rlimit_nofile 65535;   # File descriptor limit per worker

events {
    worker_connections 65535;  # Max connections per worker
    multi_accept on;           # Accept multiple connections at once
    use epoll;                 # Linux event model
}
```

The relationship between directives:

```
worker_rlimit_nofile >= worker_connections

For proxy setups:
worker_rlimit_nofile >= worker_connections x 2
```

This is because each proxied connection uses two file descriptors:

1. Client-to-Nginx connection
2. Nginx-to-backend connection
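To make the factor of two concrete, this sketch computes the descriptor budget for a proxying worker (the connection count is the illustrative value used above):

```shell
#!/bin/sh
# Each proxied connection consumes two descriptors:
# one for the client side, one for the upstream side.
WORKER_CONNECTIONS=65535
FDS_NEEDED=$((WORKER_CONNECTIONS * 2))
echo "worker_rlimit_nofile should be at least $FDS_NEEDED for a pure proxy"
```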

Apply changes:

```bash
nginx -t && systemctl reload nginx
```

Step 4: Update System Limits

The worker connections limit is bounded by system file descriptor limits:

```bash
# Check current system limit
cat /proc/sys/fs/file-max

# Check per-process limit
ulimit -n
```

For systemd systems:

```bash
# Create override
systemctl edit nginx
```

Add:

```ini
[Service]
# Sets both the soft and hard limit; use LimitNOFILE=soft:hard to split them
LimitNOFILE=65535
```

Apply:

```bash
systemctl daemon-reload
systemctl restart nginx
```

For non-systemd systems:

Edit /etc/security/limits.conf:

```
nginx soft nofile 65535
nginx hard nofile 65535
```

System-wide kernel limit:

```bash
# Temporary
sysctl -w fs.file-max=2097152

# Permanent: add to /etc/sysctl.conf
echo "fs.file-max=2097152" >> /etc/sysctl.conf
sysctl -p
```
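A quick sanity check: the system-wide limit must cover every worker's descriptor allowance. This sketch multiplies the per-worker limit by the worker count and compares (the values are the examples used in this guide, not measured ones):

```shell
#!/bin/sh
WORKERS=4                    # number of nginx workers (example)
WORKER_RLIMIT_NOFILE=65535   # per-worker limit (example)
FILE_MAX=2097152             # fs.file-max (example)

NEEDED=$((WORKERS * WORKER_RLIMIT_NOFILE))
if [ "$NEEDED" -le "$FILE_MAX" ]; then
    echo "OK: workers need $NEEDED descriptors, fs.file-max is $FILE_MAX"
else
    echo "WARNING: raise fs.file-max above $NEEDED"
fi
```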

Step 5: Optimize Connection Handling

Increasing limits is only half the solution. Optimize how connections are handled:

```nginx
# /etc/nginx/nginx.conf

events {
    worker_connections 65535;
    multi_accept on;    # Accept multiple connections at once
    use epoll;          # Linux epoll for efficiency
    accept_mutex off;   # Disable for high traffic (modern Nginx)
}

http {
    # Keepalive settings
    keepalive_timeout 65;      # Keep connections open
    keepalive_requests 1000;   # Requests per connection
    keepalive_disable none;    # Keepalive for all clients

    # Upstream keepalive
    upstream backend {
        server 127.0.0.1:3000;
        keepalive 128;   # Keep up to 128 idle connections to the backend open
    }

    server {
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Connection "";

            # Don't close client connections quickly
            proxy_read_timeout 60s;
        }
    }
}
```

Step 6: Handle Connection States

Connections in TIME_WAIT state can consume your limit:

```bash
# Check TIME_WAIT connections (ss displays the state as TIME-WAIT)
ss -tan | grep TIME-WAIT | wc -l

# Count connections by state
ss -tan | awk '{print $1}' | sort | uniq -c | sort -rn
```
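The raw counts are more useful as a percentage. This sketch stubs the `ss -tan` output with a small sample so the pipeline is reproducible; on a live box, replace the `SAMPLE` variable with real `ss -tan` output:

```shell
#!/bin/sh
# Stubbed ss -tan output: header line, then one state per connection.
SAMPLE="State
ESTAB
ESTAB
ESTAB
TIME-WAIT"

TOTAL=$(printf '%s\n' "$SAMPLE" | tail -n +2 | wc -l)   # skip header
TW=$(printf '%s\n' "$SAMPLE" | grep -c "TIME-WAIT")
echo "TIME_WAIT: $((TW * 100 / TOTAL))% of $TOTAL connections"
```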

Reduce TIME_WAIT connections:

```nginx
# In http block
reset_timedout_connection on;
```

System-level tuning:

```bash
# Add to /etc/sysctl.conf
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_tw_reuse = 1
net.core.somaxconn = 65535

# Apply
sysctl -p
```

Step 7: Monitor Connection Usage

Set up monitoring to catch issues before they happen:

Nginx stub status:

```nginx
server {
    listen 80;
    server_name localhost;

    location /nginx_status {
        stub_status on;
        allow 127.0.0.1;
        deny all;
    }
}
```

Check status:

```bash
curl http://127.0.0.1/nginx_status
```

Output:

```
Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106
```

Parse for monitoring:

```bash
# Get active connections
curl -s http://127.0.0.1/nginx_status | head -1 | awk '{print $3}'

# Get connections waiting (last field of the Reading/Writing/Waiting line)
curl -s http://127.0.0.1/nginx_status | grep "Waiting" | awk '{print $NF}'
```
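The field positions are easy to get wrong, so here is the same parsing run against a canned stub_status response (copied from the sample output above) instead of a live server:

```shell
#!/bin/sh
# Canned stub_status response for a reproducible test.
STATUS="Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106"

ACTIVE=$(printf '%s\n' "$STATUS" | head -1 | awk '{print $3}')
WAITING=$(printf '%s\n' "$STATUS" | awk '/Waiting/ {print $NF}')
echo "active=$ACTIVE waiting=$WAITING"
```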

Create a monitoring script:

```bash
#!/bin/bash
# /usr/local/bin/check_nginx_connections.sh

STATUS=$(curl -s http://127.0.0.1/nginx_status)
ACTIVE=$(echo "$STATUS" | head -1 | awk '{print $3}')
WORKERS=$(pgrep -f "nginx: worker" | wc -l)
MAX_CONN=$((WORKERS * 65535))   # Your worker_connections value
USAGE=$((ACTIVE * 100 / MAX_CONN))

if [ "$USAGE" -gt 80 ]; then
    echo "WARNING: Connection usage at ${USAGE}%"
    exit 1
fi

echo "OK: Connection usage at ${USAGE}%"
exit 0
```

Step 8: Handle Traffic Spikes

For sudden traffic increases:

```nginx
# Rate limiting to protect backend
http {
    limit_conn_zone $binary_remote_addr zone=conn_limit:10m;

    server {
        location / {
            limit_conn conn_limit 100;   # 100 concurrent connections per IP
            proxy_pass http://backend;
        }
    }
}
```
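How big should the zone be? The nginx documentation puts a 64-byte state at roughly 16,000 entries per megabyte on 64-bit systems, so the 10m zone above tracks on the order of:

```shell
#!/bin/sh
# Approximate number of client addresses a limit_conn_zone can track.
ZONE_MB=10
STATE_BYTES=64   # approximate per-state size on 64-bit systems
STATES=$((ZONE_MB * 1024 * 1024 / STATE_BYTES))
echo "~$STATES tracked addresses"
```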

Queue connections:

```nginx
server {
    listen 80 backlog=65535;  # Increase listen queue
    location / {
        proxy_pass http://backend;
    }
}
```

System listen queue:

```bash
# Check current
cat /proc/sys/net/core/somaxconn

# Increase
sysctl -w net.core.somaxconn=65535

# Permanent
echo "net.core.somaxconn=65535" >> /etc/sysctl.conf
```

Step 9: Distribute Load

If one server can't handle the traffic:

```nginx
# Multiple backends
upstream backend {
    least_conn;   # Route to server with fewest connections
    server 192.168.1.10:3000 weight=3;
    server 192.168.1.11:3000 weight=2;
    server 192.168.1.12:3000 backup;

    keepalive 64;
}
```

Or scale horizontally with multiple Nginx instances behind a load balancer.

Complete High-Concurrency Configuration

```nginx
# /etc/nginx/nginx.conf

user nginx;
worker_processes auto;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 65535;
    multi_accept on;
    use epoll;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Connection handling
    keepalive_timeout 65;
    keepalive_requests 1000;
    keepalive_disable none;
    reset_timedout_connection on;

    # Buffers
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    # Upstream
    upstream backend {
        least_conn;
        server 127.0.0.1:3000;
        keepalive 128;
    }

    server {
        listen 80 backlog=65535;
        server_name example.com;

        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header Host $host;
            proxy_connect_timeout 60s;
            proxy_read_timeout 60s;
        }

        location /nginx_status {
            stub_status on;
            allow 127.0.0.1;
            deny all;
        }
    }
}
```

System configuration (/etc/sysctl.conf):

```
fs.file-max = 2097152
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 65535
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65535
```

Systemd override (/etc/systemd/system/nginx.service.d/limits.conf):

```ini
[Service]
LimitNOFILE=100000
```

Apply all:

```bash
sysctl -p
systemctl daemon-reload
systemctl restart nginx
```

Verification Checklist

```bash
# 1. Test configuration
nginx -t

# 2. Check worker connections setting
nginx -T 2>&1 | grep worker_connections

# 3. Verify process limits
cat /proc/$(pgrep -f "nginx: worker" | head -1)/limits | grep "open files"

# 4. Monitor connections
curl http://127.0.0.1/nginx_status

# 5. Load test
ab -n 10000 -c 1000 http://localhost/

# 6. Watch during load test
watch -n 1 'curl -s http://127.0.0.1/nginx_status'
```

Worker connections configuration is about matching Nginx settings to your system limits and traffic requirements. Always verify that worker_rlimit_nofile and system limits support your worker_connections value.