# HAProxy Queue Overflow: Fix Request Queue Buildup

Queue overflow in HAProxy occurs when incoming requests exceed the backend's capacity to process them. Requests queue up waiting for an available server, leading to timeouts, 503 errors, and poor user experience. Understanding how to diagnose and fix queue overflow is critical for maintaining application availability.

## Recognizing Queue Overflow

### Symptoms in Logs

You'll see error messages like these:

```text
[WARNING] 123/145023 (1234) : Server web_servers/web1 is DOWN, reason: Layer4 connection problem.
[WARNING] 123/145023 (1234) : backend 'web_servers' has no server available!
```

Clients receive:

```http
HTTP/1.1 503 Service Unavailable
Retry-After: 5

<html><body><h1>503 Service Unavailable</h1>
Queue is full. Please try again later.
</body></html>
```

### Symptoms in Statistics

Check the HAProxy stats:

```bash
echo "show stat" | socat stdio /var/run/haproxy.sock
```

Key columns to monitor:

| Column | Description | Problem indicator |
|--------|-------------|-------------------|
| `qcur` | Current queued requests | > 0 indicates a backlog |
| `qmax` | Max queued requests | High values show peak congestion |
| `qlimit` | Queue length limit | Queue at the limit = overflow |
| `scur` | Current sessions | Near `smax` or `slim` |
| `econ` | Connection errors | High values indicate problems |
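As a quick way to pull just these columns, the field positions in the stats CSV can be used directly. This sketch assumes the stats socket lives at /var/run/haproxy.sock and a current HAProxy stats layout (qcur is field 3, qmax 4, scur 5, smax 6, econ 14); verify against the header line of your build.

```shell
# Print queue-related columns for every backend row
echo "show stat" | socat stdio /var/run/haproxy.sock | \
    awk -F',' '$2 == "BACKEND" {print $1, "qcur=" $3, "qmax=" $4, "scur=" $5, "smax=" $6, "econ=" $14}'
```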

### Real-time Queue Monitoring

```bash
# Watch queue metrics
watch -n 1 'echo "show stat" | socat stdio /var/run/haproxy.sock | grep web_servers'
```

## Root Causes of Queue Overflow

### Cause 1: Insufficient Backend Server Capacity

Backend servers can't handle the request volume.

Diagnose:

```bash
# Check backend server load
ssh user@web1 'uptime; free -m; iostat -x 1 5'

# Check backend server connections
ssh user@web1 'ss -s; netstat -an | grep :80 | wc -l'

# Check backend response times
curl -w "Time: %{time_total}s\n" -o /dev/null -s http://web1/
```

Solution - Increase backend capacity:

Add more servers:

```haproxy
backend web_servers
    balance roundrobin
    server web1 192.168.1.10:80 check maxconn 200
    server web2 192.168.1.11:80 check maxconn 200
    server web3 192.168.1.12:80 check maxconn 200  # New server
    server web4 192.168.1.13:80 check maxconn 200  # New server
```

### Cause 2: `maxconn` Too Low

HAProxy's global maxconn limits total concurrent connections.

Problem:

```haproxy
global
    maxconn 1000    # Too low for production

defaults
    maxconn 500
```

When maxconn is reached, new connections are queued or rejected.
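You can compare the configured limit with current usage through the runtime socket (the socket path is an assumption; adjust to your setup):

```shell
# Show the process-wide connection limit and the current connection count
echo "show info" | socat stdio /var/run/haproxy.sock | grep -E '^(Maxconn|CurrConns):'
```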

Solution:

Calculate appropriate maxconn:

```text
maxconn = (total RAM - RAM reserved for the system) / (per-connection buffer + kernel overhead)
```

For 8GB RAM server:

 
```haproxy
global
    # Reserve ~2GB for the system, leaving ~6GB for HAProxy
    # Each connection uses ~16KB of buffers plus ~8KB of kernel memory
    maxconn 256000
    tune.bufsize 16384
    tune.maxrewrite 1024

defaults
    maxconn 64000
```
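The sizing arithmetic behind those numbers can be sketched directly (the 6GB and 24KB figures mirror the comments above and are illustrative):

```shell
# 6 GB for HAProxy, ~24 KB per connection (16 KB buffers + 8 KB kernel)
ram_kb=$(( 6 * 1024 * 1024 ))
per_conn_kb=$(( 16 + 8 ))
echo "maxconn ceiling: $(( ram_kb / per_conn_kb ))"  # → maxconn ceiling: 262144
```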

Also set system limits:

```bash
# /etc/security/limits.conf
haproxy soft nofile 262144
haproxy hard nofile 262144
```

```bash
# /etc/sysctl.conf
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535
net.core.netdev_max_backlog = 65535
```

Apply:

```bash
sysctl -p
```

### Cause 3: Server `maxconn` Too Conservative

Per-server maxconn limits concurrent connections to each backend.

Problem:

```haproxy
backend web_servers
    server web1 192.168.1.10:80 check maxconn 50  # Too low
```

Queues form when connections exceed 50, even if server can handle more.

Solution:

Determine optimal maxconn:

```bash
# On the backend server, check current socket usage and limits
ss -s
cat /proc/sys/net/core/somaxconn

# Check nginx/apache max connections
grep worker_connections /etc/nginx/nginx.conf
```

Set maxconn based on backend capacity:

```haproxy
backend web_servers
    # Each server can handle 500 concurrent connections
    server web1 192.168.1.10:80 check maxconn 500
    server web2 192.168.1.11:80 check maxconn 500
```

### Cause 4: No Queue Timeout

Without an explicit queue timeout, requests can sit in the queue far longer than clients are willing to wait.

Problem:

```haproxy
backend web_servers
    timeout server 30s
    server web1 192.168.1.10:80 check
```

With no `timeout queue` set, HAProxy falls back to `timeout connect` to expire queued requests, a value that is rarely tuned with queuing in mind.

Solution:

Add queue timeout:

```haproxy
backend web_servers
    timeout connect 5s
    timeout queue 30s
    timeout server 30s
    server web1 192.168.1.10:80 check maxconn 500
```

The `timeout queue` directive specifies how long a request may wait in the queue before HAProxy gives up and returns a 503.
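Requests that expire while queued are logged with a termination state beginning with `sQ`, so you can count how often the queue timeout fires (the log path is an assumption; adjust to your setup):

```shell
# Count requests that timed out while waiting in the queue
grep -c ' sQ' /var/log/haproxy.log
```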

### Cause 5: Slow Backend Responses

Backend servers respond slowly, causing connections to pile up.

Diagnose:

```bash
# Check backend response times from HAProxy (rtime is field 61, ttime field 62)
echo "show stat" | socat stdio /var/run/haproxy.sock | \
    awk -F',' 'NR>1 {print $1, $2, "rtime:"$61, "ttime:"$62}'
```

High `rtime` (average response time in milliseconds over the last 1024 requests) and `ttime` (average total session time) indicate slow backends.
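To spot the slowest servers rather than scanning every row, the same fields can be sorted. The field position assumes a current HAProxy stats CSV (rtime is field 61); check the header line of your build.

```shell
# List the five slowest servers by average response time
echo "show stat" | socat stdio /var/run/haproxy.sock | \
    awk -F',' 'NR > 1 && $2 != "FRONTEND" && $2 != "BACKEND" {print $61, $1 "/" $2}' | \
    sort -rn | head -5
```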

Solution:

Optimize backend applications or add caching:

```haproxy
frontend http_front
    bind *:80

    # Serve static content from dedicated servers
    acl is_static path_end .css .js .png .jpg .gif .ico
    use_backend static_servers if is_static

    default_backend app_servers

backend static_servers
    server static1 192.168.1.20:80 check
    # Add aggressive caching
    http-response set-header Cache-Control "public, max-age=31536000"

backend app_servers
    server app1 192.168.1.10:8080 check
```

## Implementing Rate Limiting

Prevent queue overflow by limiting request rates.

### Connection Rate Limiting

```haproxy
frontend http_front
    bind *:80

    # Track connection rate per source IP
    stick-table type ip size 100k expire 30s store conn_rate(10s),conn_cur

    # Allow 20 new connections per 10 seconds
    http-request track-sc0 src
    http-request deny deny_status 429 if { sc0_conn_rate gt 20 }

    default_backend web_servers
```

### Request Rate Limiting

```haproxy
frontend http_front
    bind *:80

    # Track request rate per source IP
    stick-table type ip size 100k expire 30s store http_req_rate(10s)

    # Allow 100 requests per 10 seconds
    http-request track-sc0 src
    http-request deny deny_status 429 if { sc0_http_req_rate gt 100 }

    default_backend web_servers
```
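A rough way to confirm the limit is active is to fire more requests than the window allows and tally the status codes. This probe assumes a frontend reachable at http://localhost/; adjust the URL to your setup.

```shell
# Send 120 requests quickly; with a 100-per-10s limit, roughly the last 20 should be 429s
for i in $(seq 1 120); do
    curl -s -o /dev/null -w '%{http_code}\n' http://localhost/
done | sort | uniq -c
```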

### Custom Rate Limit Response

```haproxy
frontend http_front
    bind *:80

    stick-table type ip size 100k expire 30s store http_req_rate(10s)

    http-request track-sc0 src

    # Custom error page for rate limiting
    errorfile 429 /etc/haproxy/errors/429-rate-limit.http

    http-request deny deny_status 429 if { sc0_http_req_rate gt 100 }

    default_backend web_servers
```

Note that `errorfile` only accepts a fixed set of status codes; 429 is supported in recent HAProxy versions.

Create the error file:

```bash
cat > /etc/haproxy/errors/429-rate-limit.http << 'EOF'
HTTP/1.1 429 Too Many Requests
Content-Type: text/html
Cache-Control: no-cache
Connection: close

<html>
<head><title>429 Too Many Requests</title></head>
<body>
<h1>Too Many Requests</h1>
<p>Please slow down. Try again in a few seconds.</p>
</body>
</html>
EOF
```

## Queue Management Strategies

### Implement Request Prioritization

Prioritize important requests:

```haproxy
frontend http_front
    bind *:80

    # Mark API requests as high priority (lower class values are dequeued first)
    acl is_api path_beg /api/
    http-request set-priority-class int(-10) if is_api

    default_backend web_servers
```

Requests with a lower priority class are dequeued ahead of normal requests when queuing occurs (priority queuing requires HAProxy 1.9 or newer).

### Use Connection Draining

When queues build up, gracefully drain connections:

```bash
# Put server in drain mode
echo "set server web_servers/web1 state drain" | socat stdio /var/run/haproxy.sock
```

New connections go to other servers; existing connections are allowed to complete.

### Add Queue Monitoring

Create an alert script:

```bash
#!/bin/bash
# /usr/local/bin/check-queue.sh

QUEUE_THRESHOLD=100

# Sum qcur (field 3) across all BACKEND rows
QUEUED=$(echo "show stat" | socat stdio /var/run/haproxy.sock | \
    awk -F',' '$2=="BACKEND" {sum+=$3} END {print sum+0}')

if [ "$QUEUED" -gt "$QUEUE_THRESHOLD" ]; then
    echo "CRITICAL: $QUEUED requests in queue (threshold: $QUEUE_THRESHOLD)"
    # Send alert
    echo "$QUEUED requests in queue" | mail -s "HAProxy Queue Alert" admin@example.com
    exit 2
fi

echo "OK: $QUEUED requests in queue"
exit 0
```

## Emergency Response

When queue overflow occurs:

### Step 1: Identify the problem

```bash
echo "show stat" | socat stdio /var/run/haproxy.sock | grep -E "pxname|BACKEND"
```

### Step 2: Scale up immediately

Add more backend capacity:

```bash
# Enable backup servers
echo "set server web_servers/backup1 state ready" | socat stdio /var/run/haproxy.sock
echo "set server web_servers/backup2 state ready" | socat stdio /var/run/haproxy.sock
```

### Step 3: Reduce load

Shed load at the edge. `http-request` rules cannot be injected through the runtime socket, but the frontend connection limit can be lowered on the fly:

```bash
# Temporarily lower the frontend connection limit via the runtime API
echo "set maxconn frontend http_front 500" | socat stdio /var/run/haproxy.sock
```

For percentage-based shedding, add a rule such as `http-request deny deny_status 503 if { rand(100) lt 20 }` to the frontend configuration and reload HAProxy; this drops roughly 20% of requests.

## Verification

After implementing fixes:

```bash
# Monitor queue length (fields 1-4 are pxname,svname,qcur,qmax)
watch -n 1 'echo "show stat" | socat stdio /var/run/haproxy.sock | cut -d, -f1-4 | grep -E "pxname|BACKEND"'

# Check session distribution
echo "show stat" | socat stdio /var/run/haproxy.sock | \
    awk -F',' '$2=="BACKEND" {print $1, "scur:"$5, "smax:"$6}'

# Monitor in real-time
tail -f /var/log/haproxy.log | grep -E "Queue|503"
```

Queue metrics should stabilize, with qcur returning to 0 during normal operation.