What's Actually Happening

You configured rate limiting in Nginx to protect your application from abuse, but requests are not being limited as expected. Either no 429 errors are returned, or legitimate users are being blocked while abusers get through.

The Error You'll See

Rate limit not enforced:

```bash
# Configured limit:
limit_req zone=mylimit burst=10 nodelay;

# But requests still flood through:
$ ab -n 1000 -c 100 https://example.com/api/

Complete requests:      1000
Failed requests:        0
# No "Non-2xx responses" line -- so no 429s were returned!
Time per request:       45.2 ms
```

Rate limit logs empty:

```bash
$ tail -f /var/log/nginx/error.log

# No rate limit messages.
# Expected: limiting requests, excess: 5.3 by zone "mylimit"
# Actual: nothing
```

Or wrong clients blocked:

```bash
# Legitimate users see:
HTTP/1.1 429 Too Many Requests

# But abusers get through:
# Rate limit not applied to their IPs
```

Why This Happens

  1. Zone not defined - limit_req_zone missing from the http block
  2. Multiple Nginx instances - zones are shared between the workers of one instance, but separate instances behind a load balancer each keep their own counters
  3. Location not matched - limit_req placed in a location block the traffic never hits
  4. Proxy protocol issue - real client IP hidden behind a load balancer
  5. Burst too high - burst value absorbs the entire test load
  6. Wrong key - $binary_remote_addr behind a CDN counts all traffic as one client (the CDN's IP)
  7. Module not built - Nginx compiled with --without-http_limit_req_module
  8. Config error - limit_req references a zone that was never defined

Step 1: Verify Rate Limit Module Loaded

```bash
# ngx_http_limit_req_module is compiled into Nginx by default.
# It is absent only if Nginx was built with --without-http_limit_req_module:
nginx -V 2>&1 | grep -o without-http_limit_req_module

# Expected output: nothing (no match means the module is present)

# Inspect how Nginx was built:
nginx -V 2>&1 | tr ' ' '\n' | grep -i limit

# Functional check: if the module were missing, nginx -t would fail
# on limit_req_zone with "unknown directive":
sudo nginx -t

# If the module really is absent, reinstall Nginx from the official
# nginx.org packages or your distro's repository -- both build it in.

# Apply configuration changes:
sudo nginx -t && sudo systemctl reload nginx
```

Step 2: Configure Rate Limit Zone Correctly

```nginx
# /etc/nginx/nginx.conf

http {
    # Define zones FIRST, in the http block.
    # Syntax: limit_req_zone $key zone=name:size rate=rate;

    # Basic zone (per IP):
    limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;

    # Zone for API:
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=100r/m;

    # Tighter per-IP zone for expensive endpoints:
    limit_req_zone $binary_remote_addr zone=per_uri:10m rate=5r/s;

    # Zone for authenticated users (key set by an auth proxy header):
    limit_req_zone $http_x_user_id zone=auth_limit:10m rate=30r/s;

    # Log level for rate limit events:
    limit_req_log_level warn;

    # Return code for rate limited requests (the default is 503):
    limit_req_status 429;

    # Include sites:
    include /etc/nginx/sites-enabled/*;
}

# Zone size calculation:
# 1MB holds roughly 16,000 states when the key is $binary_remote_addr
# (each state is about 64 bytes on 64-bit systems).
# Example: 100,000 clients * 64 bytes / 1,048,576 = ~6MB
# Use 10m for headroom.

# Verify configuration syntax:
sudo nginx -t

# Note: zones live in shared memory, not on disk. To confirm a zone
# exists, check the dumped configuration:
sudo nginx -T | grep limit_req_zone
```
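The zone-size arithmetic above can be double-checked in a few lines. A rough estimate, assuming ~64 bytes per `$binary_remote_addr` state on 64-bit systems (the figure the nginx docs give) and a 1.5x safety factor:

```python
import math

STATE_BYTES = 64  # approx. size of one $binary_remote_addr state (64-bit builds)

def zone_size_mb(expected_clients: int, safety_factor: float = 1.5) -> int:
    """Smallest whole number of megabytes that holds the expected states."""
    needed_bytes = expected_clients * STATE_BYTES * safety_factor
    return max(1, math.ceil(needed_bytes / (1024 * 1024)))

print(zone_size_mb(100_000))  # 100k clients with 1.5x headroom -> 10
```

This reproduces the "use 10MB for 100,000 clients" rule of thumb; round up rather than down, since Nginx evicts old states (or returns errors) when a zone fills.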

Step 3: Apply Rate Limit in Correct Location

```nginx
# /etc/nginx/sites-available/example.com

server {
    listen 80;
    server_name example.com;

    # A limit_req here in the server block would apply to every
    # location that does not set its own -- usually too broad:
    # limit_req zone=mylimit;

    # CORRECT: apply limits in specific locations
    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://backend;
    }

    # No rate limit for static files:
    location /static/ {
        # No limit_req here
        alias /var/www/static/;
    }

    # Different limit for login:
    location /login {
        limit_req zone=mylimit burst=5 nodelay;
        proxy_pass http://backend;
    }

    # Rate limit with custom response:
    location /search {
        limit_req zone=mylimit burst=10 nodelay;
        error_page 429 = @rate_limited;
        proxy_pass http://backend;
    }

    location @rate_limited {
        default_type application/json;
        return 429 '{"error": "Rate limit exceeded", "retry_after": 60}';
    }
}

# Test configuration:
sudo nginx -t

# Common mistake -- limit_req inside an if block:
# WRONG: limit_req is not allowed in if; nginx -t fails with
# "directive is not allowed here":
# location /api/ {
#     if ($request_method = POST) {
#         limit_req zone=mylimit;
#     }
# }

# CORRECT: use a map in the http block instead.
# An empty key means the request is not counted at all:
map $request_method $post_key {
    default "";
    POST    $binary_remote_addr;
}
limit_req_zone $post_key zone=post_limit:10m rate=5r/s;

# Then in the location:
location /api/ {
    limit_req zone=post_limit;
    proxy_pass http://backend;
}
```

Step 4: Handle Client IP Behind Proxy/CDN

```nginx
# /etc/nginx/nginx.conf

http {
    # Trust your own infrastructure:
    set_real_ip_from 10.0.0.0/8;
    set_real_ip_from 172.16.0.0/12;
    set_real_ip_from 192.168.0.0/16;

    # Cloudflare ranges (verify against cloudflare.com/ips):
    set_real_ip_from 103.21.244.0/22;
    set_real_ip_from 103.22.200.0/22;
    set_real_ip_from 103.31.4.0/22;
    set_real_ip_from 104.16.0.0/13;
    set_real_ip_from 104.24.0.0/14;
    set_real_ip_from 108.162.192.0/18;
    set_real_ip_from 131.0.72.0/22;
    set_real_ip_from 141.101.64.0/18;
    set_real_ip_from 162.158.0.0/15;
    set_real_ip_from 172.64.0.0/13;
    set_real_ip_from 173.245.48.0/20;
    set_real_ip_from 188.114.96.0/20;
    set_real_ip_from 190.93.240.0/20;
    set_real_ip_from 197.234.240.0/22;
    set_real_ip_from 198.41.128.0/17;

    # Which header contains the real IP:
    real_ip_header X-Forwarded-For;
    # Or for Cloudflare:
    # real_ip_header CF-Connecting-IP;

    # Walk the X-Forwarded-For chain past all trusted proxies:
    real_ip_recursive on;

    # With real_ip configured, $binary_remote_addr is now the real
    # client IP, so this limits per end user, not per proxy:
    limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;
}

# Debug which IP Nginx sees -- expose it as a response header:
add_header X-Debug-IP $remote_addr always;

# Then inspect it from a client:
curl -sI https://example.com/ | grep -i x-debug-ip

# Or log both addresses for comparison:
log_format debug '$remote_addr - $http_x_forwarded_for - $request';
access_log /var/log/nginx/debug.log debug;
```
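set_real_ip_from is ultimately just CIDR membership. The check Nginx performs can be modeled with Python's ipaddress module to sanity-test which addresses your configuration would trust; the ranges below mirror the private networks listed above (swap in your own):

```python
import ipaddress

# Assumption: these mirror the set_real_ip_from entries in your config.
TRUSTED_PROXIES = [
    ipaddress.ip_network(cidr)
    for cidr in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")
]

def is_trusted_proxy(addr: str) -> bool:
    """Would set_real_ip_from accept a forwarded header from this address?"""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in TRUSTED_PROXIES)

print(is_trusted_proxy("10.1.2.3"))   # True: inside 10.0.0.0/8
print(is_trusted_proxy("8.8.8.8"))    # False: X-Forwarded-For is ignored
```

If a load balancer's address comes back False here, Nginx will ignore its X-Forwarded-For header and rate-limit the balancer's own IP instead of the client's.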

Step 5: Configure Burst and Delay Correctly

```nginx
# Rate limit options explained (pick ONE variant per zone):

location /api/ {
    # Basic: 10r/s, no burst. Any request from the same IP arriving
    # sooner than 100ms after the previous one gets a 429:
    limit_req zone=mylimit;

    # With burst: allow a temporary spike. Up to 20 excess requests
    # queue and are drained at 10r/s:
    # limit_req zone=mylimit burst=20;

    # With nodelay: the burst is served immediately instead of queued.
    # Roughly 1 + burst requests succeed at once; further requests
    # get 429 until slots free up at the configured rate:
    # limit_req zone=mylimit burst=20 nodelay;

    # With delay=N: the first N excess requests are served immediately,
    # the rest of the burst is delayed to the configured rate,
    # and anything beyond the burst gets a 429:
    # limit_req zone=mylimit burst=20 delay=10;

    proxy_pass http://backend;
}

# Example configurations by use case:

# 1. API rate limit (allow some burst):
limit_req zone=api_limit burst=50 nodelay;

# 2. Login protection (strict):
limit_req zone=login_limit burst=5 nodelay;

# 3. Search endpoint (moderate):
limit_req zone=search_limit burst=10 delay=5;

# 4. Static files (lenient or none):
# No rate limit needed

# 5. Webhook receiver (high burst for batches):
limit_req zone=webhook_limit burst=100 nodelay;

# Size the burst from expected traffic:
# burst = expected_peak_rps * acceptable_queue_seconds
# Example: 100 peak r/s with a 1-second queue -> burst=100
```

Step 6: Log and Monitor Rate Limiting

```nginx
# Enable rate limit logging:

http {
    # Log level for rejections (info, notice, warn, error);
    # delayed requests are logged one level lower:
    limit_req_log_level warn;

    server {
        location /api/ {
            limit_req zone=mylimit burst=10 nodelay;
            # Rejections and delays are logged to error.log automatically
        }
    }
}

# Check logs:
tail -f /var/log/nginx/error.log | grep "limiting requests"

# Output:
# 2026/04/08 10:30:15 [warn] 1234#1234: *567 limiting requests,
# excess: 5.3 by zone "mylimit", client: 192.168.1.100,
# server: example.com, request: "GET /api/ HTTP/1.1"

# Custom access log format with the limit decision
# ($limit_req_status is PASSED, DELAYED or REJECTED; nginx 1.17.6+):
log_format ratelimit '$remote_addr - $time_local - '
                     '$request - $status - '
                     'limit=$limit_req_status';
access_log /var/log/nginx/rate_limit.log ratelimit;

# Count rate-limited requests per IP:
grep "limiting requests" /var/log/nginx/error.log | \
    grep -oP 'client: \K[^,]+' | sort | uniq -c | sort -rn

# Basic status endpoint (stub_status does not expose rate limit
# counters; use the log-based counts above, or OpenResty/Lua
# for detailed metrics):
location /nginx_status {
    stub_status on;
}
```

Step 7: Debug Rate Limit Configuration

```bash
# Test rate limit configuration:

# 1. Verify the zone is defined:
sudo nginx -T | grep -A 5 "limit_req_zone"

# 2. Check where limit_req is applied:
sudo nginx -T | grep -A 2 "limit_req zone"

# 3. Test with curl:
for i in {1..20}; do
  curl -w "%{http_code}\n" -o /dev/null -s https://example.com/api/
done

# Output should turn from 200s into 429s:
# 200
# 200
# ...
# 429  <- rate limit kicked in

# 4. Test burst behavior:
for i in {1..30}; do
  curl -w "%{http_code} " -o /dev/null -s https://example.com/api/
done
echo

# With burst=20 nodelay:
# roughly the first 21 requests (1 + burst) succeed immediately,
# then 429s until slots drain at the configured rate

# Without burst:
# at 10r/s even the 2nd request is rejected if it arrives
# within 100ms of the first

# 5. Test with Apache Bench:
ab -n 100 -c 10 https://example.com/api/
# Check "Non-2xx responses" in the output -- that is where 429s appear

# 6. Test with different IPs (honored only when the request comes
# from a trusted set_real_ip_from address):
curl -H "X-Forwarded-For: 1.2.3.4" https://example.com/api/
curl -H "X-Forwarded-For: 5.6.7.8" https://example.com/api/

# 7. Zones live in shared memory; the anonymous mappings are visible in:
sudo cat /proc/$(pgrep nginx | head -1)/maps | grep -i anon
```
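If you prefer scripted verification over shell loops, the curl tests above can be sketched in Python. This is a hedged example: the URL, request count, and concurrency are placeholders to adapt to your endpoint, and the expected 200/429 split depends on your rate and burst settings:

```python
import collections
import concurrent.futures
import urllib.error
import urllib.request

def fetch_status(url: str) -> int:
    """GET the URL and return the HTTP status, treating 429 as a result."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status
    except urllib.error.HTTPError as exc:
        return exc.code  # 429 etc. arrive as HTTPError

def summarize(codes) -> dict:
    """Tally responses per status code, e.g. {200: 21, 429: 9}."""
    return dict(collections.Counter(codes))

def run_check(url: str, total: int = 30, workers: int = 10) -> dict:
    """Fire `total` concurrent GETs and tally the status codes."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return summarize(pool.map(fetch_status, [url] * total))

# Example (placeholder URL -- uncomment against your own server):
# print(run_check("https://example.com/api/"))
```

With `rate=10r/s burst=20 nodelay` and 30 concurrent requests, you would expect roughly 21 entries under 200 and the rest under 429.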

Step 8: Handle Edge Cases

```nginx
# Exclude certain IPs from rate limiting.
# Note: map cannot match CIDR ranges, and geo cannot emit variables,
# so combine the two. An empty key is never limited:

http {
    geo $limit {
        default        1;
        192.168.1.100  0;   # whitelisted host
        10.0.0.0/8     0;   # internal network
    }

    map $limit $limit_key {
        0 "";
        1 $binary_remote_addr;
    }

    limit_req_zone $limit_key zone=mylimit:10m rate=10r/s;

    server {
        location /api/ {
            limit_req zone=mylimit burst=10 nodelay;
            proxy_pass http://backend;
        }
    }
}

# Rate limit by API key:

http {
    # Use the API key as the limit key when one is present:
    map $http_x_api_key $limit_key {
        default            $binary_remote_addr;
        "~^[a-zA-Z0-9]+$"  $http_x_api_key;   # valid API key
    }

    limit_req_zone $limit_key zone=api_limit:10m rate=100r/m;

    server {
        location /api/ {
            # Authenticated users get a larger burst:
            limit_req zone=api_limit burst=200 nodelay;
            proxy_pass http://backend;
        }
    }
}

# Different limits for different methods.
# The rate in limit_req_zone cannot be a variable, and limit_req does
# not accept a variable zone name. Instead, define one zone per method
# with a key that is empty for all other methods:

http {
    map $request_method $get_key {
        GET     $binary_remote_addr;
        default "";
    }
    map $request_method $post_key {
        POST    $binary_remote_addr;
        default "";
    }

    limit_req_zone $get_key  zone=get_limit:10m  rate=20r/s;
    limit_req_zone $post_key zone=post_limit:10m rate=5r/s;

    server {
        location /api/ {
            # Several limit_req directives may apply at once;
            # a request with an empty key is simply not counted:
            limit_req zone=get_limit  burst=20 nodelay;
            limit_req zone=post_limit burst=5  nodelay;
            proxy_pass http://backend;
        }
    }
}

# Rate limit by authenticated user:

http {
    # Requires the user ID in a header, set by an auth proxy or backend
    # (extracting it from a JWT or session needs Lua/OpenResty):
    limit_req_zone $http_x_user_id zone=user_limit:10m rate=100r/s;

    # Or combine IP and API key into one key:
    limit_req_zone "$binary_remote_addr:$http_x_api_key" zone=combined:10m rate=50r/s;
}
```

Step 9: Use Advanced Rate Limiting with Lua

```nginx
# Requires OpenResty or lua-nginx-module plus lua-resty-limit-traffic

http {
    lua_shared_dict rate_limit_store 10m;

    init_by_lua_block {
        require "resty.core"
    }

    server {
        location /api/ {
            access_by_lua_block {
                local limit_req = require "resty.limit.req"

                -- 100 requests per second, burst 200
                local lim, err = limit_req.new("rate_limit_store", 100, 200)
                if not lim then
                    ngx.log(ngx.ERR, "failed to instantiate limit_req: ", err)
                    return ngx.exit(500)
                end

                -- Use the client IP as the key
                local key = ngx.var.binary_remote_addr
                local delay, err = lim:incoming(key, true)

                if not delay then
                    if err == "rejected" then
                        return ngx.exit(429)
                    end
                    ngx.log(ngx.ERR, "failed to limit: ", err)
                    return ngx.exit(500)
                end

                if delay >= 0.001 then
                    -- Request delayed to smooth the burst
                    ngx.sleep(delay)
                end
            }

            proxy_pass http://backend;
        }
    }
}

# Benefits of Lua rate limiting:
# - Dynamic rate limits
# - Per-user, per-API-key limits
# - Whitelist/blacklist changes at runtime
# - Custom algorithms (e.g. sliding window)
```
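For intuition about what limit_req (and resty.limit.req) track per key, here is a toy leaky-bucket model: an excess counter that drains at the configured rate and rejects once it would exceed the burst. This is an illustration of the algorithm only, not the real C implementation; in real use you would pass time.monotonic() as the timestamp:

```python
class LeakyBucket:
    """Toy model of limit_req's per-key state: an `excess` counter
    that leaks at `rate` requests/second and rejects once it would
    exceed `burst`."""

    def __init__(self, rate: float, burst: int, start: float = 0.0):
        self.rate = rate        # allowed requests per second
        self.burst = burst      # extra requests tolerated in a spike
        self.excess = 0.0       # analogous to nginx's "excess" value
        self.last = start       # timestamp of the previous request

    def allow(self, now: float) -> bool:
        # Drain the bucket for the time elapsed since the last request.
        self.excess = max(0.0, self.excess - (now - self.last) * self.rate)
        self.last = now
        if self.excess > self.burst:   # over rate + burst -> reject (429)
            return False
        self.excess += 1.0
        return True

# 10 r/s with burst=2: about 1 + burst requests pass instantly,
# then rejections until the counter drains.
b = LeakyBucket(rate=10, burst=2)
print([b.allow(0.0) for _ in range(4)])  # [True, True, True, False]
print(b.allow(1.0))                      # True -- drained after 1 second
```

The nodelay/delay options in Step 5 only change *when* accepted excess requests are forwarded upstream; the accept/reject accounting is the same counter shown here.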

Step 10: Implement Production Rate Limiting Strategy

```nginx
# /etc/nginx/nginx.conf - Complete configuration

http {
    # Whitelist: geo matches CIDR ranges, map turns the flag into a
    # key (an empty key is never limited)
    geo $limit {
        default    1;
        127.0.0.1  0;
        10.0.0.0/8 0;
    }
    map $limit $limit_key {
        0 "";
        1 $binary_remote_addr;
    }

    # Define zones for different endpoints
    limit_req_zone $limit_key zone=general:10m rate=10r/s;
    limit_req_zone $limit_key zone=api:10m     rate=100r/m;
    limit_req_zone $limit_key zone=login:10m   rate=5r/m;
    limit_req_zone $limit_key zone=search:10m  rate=30r/m;

    # Log and status settings
    limit_req_log_level warn;
    limit_req_status 429;

    # Real IP configuration
    set_real_ip_from 10.0.0.0/8;
    set_real_ip_from 172.16.0.0/12;
    set_real_ip_from 192.168.0.0/16;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;

    # Log format for rate limit monitoring (log_format is only
    # valid in the http block, not inside server)
    log_format ratelimit escape=json '{'
        '"time":"$time_local",'
        '"remote_addr":"$remote_addr",'
        '"request":"$request",'
        '"status":$status,'
        '"limit_status":"$limit_req_status"'
    '}';

    include /etc/nginx/sites-enabled/*;
}

# /etc/nginx/sites-available/api.example.com

server {
    listen 80;
    server_name api.example.com;

    # Health check - no rate limit
    location /health {
        access_log off;
        return 200 "OK";
    }

    # Login - strict rate limit
    location /auth/login {
        limit_req zone=login burst=3 nodelay;
        error_page 429 = @too_many_requests;
        proxy_pass http://auth_backend;
    }

    # API - moderate rate limit
    location /api/v1/ {
        limit_req zone=api burst=50 nodelay;
        error_page 429 = @too_many_requests;
        proxy_pass http://api_backend;
    }

    # Search - specific rate limit
    location /api/v1/search {
        limit_req zone=search burst=10 nodelay;
        error_page 429 = @too_many_requests;
        proxy_pass http://api_backend;
    }

    # Custom error response
    location @too_many_requests {
        default_type application/json;
        add_header Retry-After 60 always;
        return 429 '{"error":"Too Many Requests","retry_after":60}';
    }

    access_log /var/log/nginx/api_access.log ratelimit;
    error_log /var/log/nginx/api_error.log warn;
}

# Monitoring script:
cat << 'EOF' > /usr/local/bin/check_rate_limits.sh
#!/bin/bash

echo "=== Rate Limit Statistics ==="

echo -e "\nTop 10 rate-limited IPs:"
grep "limiting requests" /var/log/nginx/error.log | \
    grep -oP 'client: \K[^,]+' | \
    sort | uniq -c | sort -rn | head -10

echo -e "\nRate limit events in last hour:"
grep "limiting requests" /var/log/nginx/error.log | \
    grep "$(date '+%Y/%m/%d %H')" | wc -l

echo -e "\n429 responses in last hour:"
grep '"status":429' /var/log/nginx/api_access.log | \
    grep "$(date '+%d/%b/%Y:%H')" | wc -l

echo -e "\nMemory usage:"
free -h | grep -E "Mem|Swap"

echo -e "\nNginx worker processes:"
ps aux | grep nginx | grep -v grep
EOF

chmod +x /usr/local/bin/check_rate_limits.sh

# Add to cron for monitoring:
# */5 * * * * /usr/local/bin/check_rate_limits.sh >> /var/log/rate_limit_stats.log
```
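The JSON `ratelimit` access log format above also parses cleanly without shell pipelines. A small sketch for tallying status codes from those lines; in production you would feed it `open("/var/log/nginx/api_access.log")` or whatever path your access_log points at:

```python
import collections
import json

def count_by_status(lines) -> dict:
    """Tally status codes from JSON-formatted access log lines."""
    counts = collections.Counter()
    for line in lines:
        try:
            counts[json.loads(line)["status"]] += 1
        except (ValueError, KeyError):
            continue  # skip malformed or non-JSON lines
    return dict(counts)

sample = ['{"status":200}', '{"status":429}', '{"status":429}', 'not json']
print(count_by_status(sample))  # {200: 1, 429: 2}
```

Because the log_format emits `"status":$status` unquoted, the keys come back as integers, which makes thresholding (e.g. alert when 429 counts spike) straightforward.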

Nginx Rate Limiting Checklist

| Check | Command | Expected |
|---|---|---|
| Module built | `nginx -V` | no `--without-http_limit_req_module` |
| Zone defined | `nginx -T` | `limit_req_zone` present |
| Zone applied | `nginx -T` | `limit_req` in location |
| Real IP | check logs | correct client IP |
| Burst config | `nginx -T` | appropriate burst |
| Logs active | `tail error.log` | rate limit messages |

Verify the Fix

```bash
# After configuring rate limiting:

# 1. Test configuration
sudo nginx -t
# Output: syntax ok, test successful

# 2. Reload Nginx
sudo systemctl reload nginx

# 3. Test rate limit enforcement
for i in {1..15}; do
  curl -s -w "%{http_code}\n" -o /dev/null https://api.example.com/api/v1/test
done
# Output: 200 200 ... 429 (rate limit applied)

# 4. Check logs
tail /var/log/nginx/error.log | grep "limiting"
# Output: limiting requests, excess: X.X by zone "api"

# 5. Test burst behavior
ab -n 150 -c 50 https://api.example.com/api/v1/test
# Output: "Non-2xx responses" counts the 429s

# 6. Verify whitelisted IPs (the header is honored only when sent
# from a trusted set_real_ip_from address)
curl -H "X-Forwarded-For: 10.0.0.1" https://api.example.com/api/v1/test
# Output: 200 (whitelisted)

# 7. Test different zones
# Login should have a stricter limit:
for i in {1..10}; do
  curl -s -w "%{http_code}\n" -o /dev/null -X POST https://api.example.com/auth/login
done
# Output: 429 after fewer requests than the API endpoint

# 8. Monitor in production
/usr/local/bin/check_rate_limits.sh
# Output: rate limit statistics

# Compare before/after:
# Before: no rate limiting, every request processed
# After: rate limiting enforced, 429s returned appropriately
```

  • [Fix Nginx 502 Bad Gateway](/articles/fix-502-bad-gateway)
  • [Fix Nginx 504 Gateway Timeout](/articles/fix-504-gateway-timeout)
  • [Fix Nginx Configuration Syntax Error](/articles/fix-nginx-configuration-syntax-error)