# Fix Nginx Worker Connections Are Not Enough Under DDoS Load

During a traffic spike or DDoS attack, you might see this error in your Nginx logs:

```
2026/04/08 09:15:22 [alert] 3421#3421: 1024 worker_connections are not enough
2026/04/08 09:15:22 [error] 3421#3421: *99876 connect() failed (111: Connection refused) while connecting to upstream, client: 198.51.100.17
```

This means Nginx has exhausted its per-worker connection limit and cannot accept new client connections. Under normal traffic, the default of 1024 is usually sufficient. Under attack or viral traffic, it becomes a bottleneck within seconds.

## Understanding the Connection Math

Each Nginx worker process can handle at most worker_connections simultaneous connections. The total capacity is:

```
total_connections = worker_processes * worker_connections
```

But the real formula depends on what each connection is doing. When Nginx acts as a reverse proxy, each client connection also opens an upstream connection, consuming two slots from the pool. So the effective capacity is halved:

```
effective_capacity = (worker_processes * worker_connections) / 2
```

With the default 1 worker and 1024 connections, you can handle only 512 proxied requests simultaneously.
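The math above can be checked with a quick shell calculation. The variable names below are mine, chosen to mirror the Nginx directives, and the values are examples rather than recommendations:

```shell
# Capacity math from the formulas above; adjust to your own settings.
worker_processes=4
worker_connections=4096

total=$(( worker_processes * worker_connections ))
effective=$(( total / 2 ))   # halved when every request is proxied

echo "total=$total effective=$effective"
# prints: total=16384 effective=8192
```

Run it with your real values before and after tuning to see how much headroom a change actually buys you.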

## Step 1: Diagnose the Current Limit

Check your current Nginx configuration:

```bash
nginx -T 2>/dev/null | grep worker_connections
```

And check current connection counts:

```bash
ss -s
ss -an | grep :80 | wc -l
ss -an | grep :443 | wc -l
```

If the number of established connections approaches your limit, you are running out of headroom.
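As a rough headroom check, the counts above can be compared against the configured pool. This sketch uses hard-coded example numbers in place of live `ss` output:

```shell
# Hedged sketch: how much of the connection pool is left. Replace the
# example count with a live one, e.g.: ss -Hant state established | wc -l
limit=1024          # worker_processes * worker_connections
established=900     # example observation

free=$(( limit - established ))
pct=$(( free * 100 / limit ))
echo "headroom: $free slots (${pct}%)"
# prints: headroom: 124 slots (12%)
```

Single-digit headroom percentages during normal traffic mean an attack will exhaust the pool almost immediately.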

## Step 2: Increase Worker Connections

Edit /etc/nginx/nginx.conf:

```nginx
worker_processes auto;

events {
    worker_connections 4096;
    multi_accept on;
    use epoll;
}
```

Setting multi_accept on tells each worker to accept all pending connections at once rather than one at a time. The use epoll directive selects the epoll event notification mechanism; modern Nginx builds pick epoll automatically on Linux, so this line mostly makes the choice explicit.

## Step 3: Raise System File Descriptor Limits

Increasing worker_connections alone is not enough. The operating system imposes its own limits. Check the current limit:

```bash
cat /proc/$(cat /run/nginx.pid)/limits | grep "Max open files"
```

You will likely see 1024 soft and 4096 hard. You need to raise both. Edit /etc/nginx/nginx.conf:

```nginx
# Top-level (main) context, outside any http {} or events {} block
worker_rlimit_nofile 65535;
```
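As a sanity check on the number you pick, a rough rule of thumb (my heuristic, not an official Nginx formula) is at least two descriptors per proxied connection, plus slack for logs, cache files, and listen sockets:

```shell
# Heuristic only: ~2 fds per proxied connection (client + upstream)
# plus slack for logs, cache, and listen sockets.
worker_connections=4096
slack=512

suggested=$(( worker_connections * 2 + slack ))
echo "worker_rlimit_nofile should be at least $suggested"
# prints: worker_rlimit_nofile should be at least 8704
```

Rounding up to a generous value like 65535 is common because file descriptors are cheap and running out of them is not.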

Then edit /etc/security/limits.conf:

```
nginx soft nofile 65535
nginx hard nofile 65535
```

On systemd-based systems, limits.conf applies only to login sessions, not to services, so you must also set the limit in a service override:

```bash
sudo systemctl edit nginx
```

Add:

```ini
[Service]
LimitNOFILE=65535
```

## Step 4: Optimize Connection Handling

```nginx
http {
    upstream backend {
        server 127.0.0.1:8080;
        keepalive 32;
    }

    # Zone definitions must live at the http level, not inside server {}
    limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;
    limit_conn_zone $binary_remote_addr zone=addr:10m;

    server {
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }

        location /api/ {
            limit_req zone=per_ip burst=20 nodelay;
            proxy_pass http://backend;
        }

        location /downloads/ {
            limit_conn addr 5;
        }
    }
}
```

The keepalive 32 directive keeps up to 32 idle connections per worker open to the upstream for reuse instead of opening a new one for every request; proxy_http_version 1.1 together with an empty Connection header is required for that reuse to work. Rate limiting and connection limiting protect against DDoS by capping per-IP traffic.

## Step 5: Verify the Configuration

```bash
sudo nginx -t
sudo systemctl reload nginx
```

Then stress-test to verify the new limits:

```bash
sudo apt install wrk
wrk -t12 -c200 -d30s http://your-server/
```

Monitor during the test:

```bash
watch -n 1 'ss -s | head -5'
```

## DDoS-Specific Hardening

Beyond connection limits, add these defenses:

```nginx
# geo blocks live at the http level, alongside your server blocks
geo $blocked {
    default 0;
    198.51.100.0/24 1;
    203.0.113.0/24 1;
}

server {
    if ($blocked) {
        return 444;
    }

    client_header_buffer_size 1k;
    large_client_header_buffers 2 1k;
    client_body_timeout 10s;
    client_header_timeout 10s;
    keepalive_timeout 15s;
    send_timeout 10s;
}
```

The return 444 closes the connection without sending any response, which is effective against scanners and simple DDoS tools. Aggressive timeouts defeat Slowloris-style attacks that hold connections open indefinitely.
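For a quick offline check that an address falls inside one of the blocked ranges above, a shell prefix match is enough for /24 networks. This is my illustration, not part of the Nginx configuration:

```shell
# /24 prefix match only; general CIDR matching needs real bit math.
ip="198.51.100.17"

case "$ip" in
  198.51.100.*|203.0.113.*) echo "$ip: blocked (would receive 444)" ;;
  *)                        echo "$ip: allowed" ;;
esac
# prints: 198.51.100.17: blocked (would receive 444)
```

This is handy when triaging logs to confirm an attacking IP would actually be caught by the geo block before you reload Nginx.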

## Monitoring Setup

Add these checks to your monitoring system:

```bash
curl -s http://127.0.0.1/nginx_status
```

This assumes a location with stub_status enabled is exposed at /nginx_status. Typical output looks like:

```
Active connections: 1247
server accepts handled requests
 89432 89432 156234
Reading: 12 Writing: 89 Waiting: 1146
```

If active connections consistently exceed 80% of your total capacity (worker_processes * worker_connections), it is time to scale horizontally or add a CDN/WAF in front of Nginx.
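That threshold check can be scripted against the stub_status output. The limit and the sample text below are placeholders; feed in real curl output and your own capacity number:

```shell
# Parse "Active connections: N" from stub_status and flag high usage.
limit=4096
status="Active connections: 1247
server accepts handled requests
 89432 89432 156234
Reading: 12 Writing: 89 Waiting: 1146"

active=$(printf '%s\n' "$status" | awk '/Active connections/ {print $3}')
pct=$(( active * 100 / limit ))
echo "active=$active (${pct}% of $limit)"
# prints: active=1247 (30% of 4096)

if [ "$pct" -ge 80 ]; then
  echo "WARN: approaching capacity; scale out or add a CDN/WAF"
fi
```

Dropped into cron or a monitoring agent, a check like this gives you warning minutes before the worker_connections alert starts appearing in the error log.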