What's Actually Happening
An Nginx reverse proxy logs upstream timeout errors when a backend server does not respond within the configured timeout window. Clients receive a 504 Gateway Timeout response.
The Error You'll See
Nginx error log:
```bash
$ tail /var/log/nginx/error.log
2024/01/01 00:00:00 [error] 12345#12345: *67890 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.0.0.1, server: api.example.com, request: "GET /api/users HTTP/1.1", upstream: "http://10.0.0.2:8080/api/users"
```
Client response:
```
HTTP/1.1 504 Gateway Timeout
```

Log with connect timeout:

```
2024/01/01 00:00:00 [error] 12345#12345: *67890 upstream timed out (110: Connection timed out) while connecting to upstream
```

Why This Happens
1. **Backend slow** - Application takes too long to respond
2. **Timeout too short** - Default 60s is insufficient for the operation
3. **Backend down** - Service unavailable or crashing
4. **Network latency** - Slow network between Nginx and backend
5. **Backend overload** - Server cannot handle the request volume
6. **Large payloads** - Slow upload/download of large files
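To tell these causes apart, it can help to reproduce the failure in isolation. A minimal sketch, assuming `python3` and `curl` are available locally (the port 9001 and the 5-second delay are arbitrary choices): a backend that answers more slowly than the timeout fails the same way a proxied request does.

```shell
# Start a deliberately slow backend (assumption: python3 is installed; port 9001 is arbitrary).
python3 - <<'PY' &
import http.server, time

class SlowHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(5)                  # answer slower than the client timeout below
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"slow ok\n")
    def log_message(self, *args):      # silence per-request logging
        pass

http.server.HTTPServer(("127.0.0.1", 9001), SlowHandler).serve_forever()
PY
BACKEND_PID=$!
sleep 1

# A client deadline shorter than the backend delay fails the same way
# proxy_read_timeout does; curl exit code 28 means "operation timed out".
curl -s --max-time 2 http://127.0.0.1:9001/
CURL_EXIT=$?
echo "curl exit code: $CURL_EXIT"

kill "$BACKEND_PID"
```

Pointing a test Nginx server with a short `proxy_read_timeout` at this backend produces the 504 and the error-log line shown above.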
Step 1: Check Nginx Logs
```bash
# Check error log:
tail -f /var/log/nginx/error.log

# Check access log:
tail -f /var/log/nginx/access.log

# Filter timeout errors:
grep "timed out" /var/log/nginx/error.log | tail -20

# Log upstream response times (nginx config, not shell -- add to nginx.conf
# and reference the format in an access_log directive):
#   log_format timed '$remote_addr - $request_time $upstream_response_time';

# Check a specific upstream:
grep 'upstream: "http://api:8080' /var/log/nginx/error.log | tail -10

# Check for patterns:
awk '/timed out/ {print $4, $7, $11}' /var/log/nginx/error.log
```
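The wording of the error also tells you which timer fired: "while connecting to upstream" points at `proxy_connect_timeout`, while "while reading response header from upstream" points at `proxy_read_timeout`. A small sketch that tallies the two (the sample lines stand in for a real `/var/log/nginx/error.log`):

```shell
# Sample lines stand in for a real /var/log/nginx/error.log.
cat > /tmp/sample_error.log <<'EOF'
2024/01/01 00:00:00 [error] 1#1: *1 upstream timed out (110: Connection timed out) while reading response header from upstream, upstream: "http://10.0.0.2:8080/api/users"
2024/01/01 00:00:01 [error] 1#1: *2 upstream timed out (110: Connection timed out) while connecting to upstream, upstream: "http://10.0.0.3:8080/api/users"
2024/01/01 00:00:02 [error] 1#1: *3 upstream timed out (110: Connection timed out) while reading response header from upstream, upstream: "http://10.0.0.2:8080/api/users"
EOF

# "while connecting"              -> raise proxy_connect_timeout, or fix reachability
# "while reading response header" -> raise proxy_read_timeout, or speed up the backend
echo "connect-phase timeouts: $(grep -c 'while connecting to upstream' /tmp/sample_error.log)"
echo "read-phase timeouts:    $(grep -c 'while reading response header' /tmp/sample_error.log)"
# prints:
#   connect-phase timeouts: 1
#   read-phase timeouts:    2
```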
Step 2: Check Backend Health
```bash
# Test backend directly:
curl -I http://10.0.0.2:8080/api/users

# Check response time:
curl -w "Time: %{time_total}s\n" http://10.0.0.2:8080/api/users

# Check backend processes:
ssh api-server "ps aux | grep app"

# Check backend resources:
ssh api-server "top -b -n 1 | head -20"

# Check backend logs:
ssh api-server "tail -100 /var/log/app/error.log"

# Check backend connectivity:
ping -c 3 api-server
nc -zv api-server 8080
```
Step 3: Check Current Timeout Settings
```bash
# Check Nginx configuration:
grep -i timeout /etc/nginx/nginx.conf

# Check site-specific config:
grep -i timeout /etc/nginx/sites-enabled/api.conf

# Common timeout directives:
#   proxy_connect_timeout  time to establish a connection to the upstream
#   proxy_send_timeout     time between two successive writes to the upstream
#   proxy_read_timeout     time between two successive reads from the upstream
#   send_timeout           time between two successive writes to the client

# Check the full effective configuration:
nginx -T | grep -A 20 "location /api"

# Default values:
#   proxy_connect_timeout 60s
#   proxy_send_timeout 60s
#   proxy_read_timeout 60s
```
Step 4: Increase Timeout Settings
```nginx
# In the http block (global timeout settings):
http {
    proxy_connect_timeout 300s;
    proxy_send_timeout    300s;
    proxy_read_timeout    300s;
    send_timeout          300s;
}

# Or per location:
location /api {
    proxy_pass http://api-backend;

    # Connection timeout
    proxy_connect_timeout 60s;

    # Send timeout (uploads)
    proxy_send_timeout 300s;

    # Read timeout (downloads/responses)
    proxy_read_timeout 300s;

    # For long-polling or SSE, raise it instead
    # (the directive may appear only once per location):
    # proxy_read_timeout 3600s;
}

# Apply changes (shell):
#   nginx -t && nginx -s reload
```
Step 5: Configure Upstream Keepalive
```nginx
# Upstream with keepalive:
upstream api-backend {
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;

    # Idle keepalive connections cached per worker
    keepalive 32;

    # How long an idle keepalive connection stays open
    keepalive_timeout 60s;
}

location /api {
    proxy_pass http://api-backend;
    proxy_http_version 1.1;

    # Clear the Connection header so keepalive works
    proxy_set_header Connection "";

    # Timeouts
    proxy_connect_timeout 60s;
    proxy_send_timeout    300s;
    proxy_read_timeout    300s;
}

# Keepalive reuses upstream connections, reducing connection
# overhead and connect-phase timeouts under load.
```
Step 6: Configure Buffering
```nginx
# Buffering configuration:
location /api {
    proxy_pass http://api-backend;

    # Enable buffering (the default)
    proxy_buffering on;

    # Buffer for the first part of the response (headers)
    proxy_buffer_size 4k;

    # Number and size of buffers per connection
    proxy_buffers 8 16k;

    # Limit on buffers busy sending to the client
    proxy_busy_buffers_size 32k;

    # Temp file path for responses that overflow the buffers
    proxy_temp_path /var/cache/nginx/proxy_temp;

    # Maximum size of that temp file per request
    proxy_max_temp_file_size 1024m;

    # Disable buffering for streaming:
    # proxy_buffering off;
}

# For large file uploads:
client_max_body_size 100m;
client_body_buffer_size 128k;
```
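When the commented-out `proxy_buffering off` is what an endpoint actually needs (Server-Sent Events, chunked streaming), it is cleaner to disable buffering only there and keep the rest of the site buffered. A sketch, assuming a hypothetical `/api/events` streaming endpoint:

```nginx
# Hypothetical streaming endpoint; /api above keeps buffering on.
location /api/events {
    proxy_pass http://api-backend;
    proxy_http_version 1.1;

    # Stream bytes to the client as the upstream produces them
    proxy_buffering off;
    proxy_cache off;

    # Long-lived streams need a long read timeout
    proxy_read_timeout 3600s;

    # Keepalive-friendly Connection header
    proxy_set_header Connection "";
}
```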
Step 7: Add Health Checks
```nginx
# Active health checks (NGINX Plus only):
upstream api-backend {
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;

    health_check interval=5s fails=3 passes=2;
}

# Passive health checks (open source Nginx):
upstream api-backend {
    server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.3:8080 max_fails=3 fail_timeout=30s;
}

# After max_fails failed attempts within fail_timeout, the server
# is considered unavailable for the fail_timeout duration.

# Check connection stats (shell; stub_status shows totals,
# not per-upstream health):
#   curl http://localhost/nginx_status
```
Step 8: Handle Slow Backends
```nginx
# For known slow endpoints:
location /api/export {
    proxy_pass http://api-backend;

    # Extended timeout for the export
    proxy_read_timeout 600s;

    # Either buffer large responses...
    proxy_buffers 16 64k;
    proxy_buffer_size 128k;

    # ...or disable buffering to stream them
    # (which makes the buffer sizes above irrelevant):
    # proxy_buffering off;

    # Retry on error or timeout (safe for idempotent requests only)
    proxy_next_upstream error timeout;
    proxy_next_upstream_tries 2;
}

# For file uploads:
location /api/upload {
    proxy_pass http://api-backend;

    # Extended send timeout
    proxy_send_timeout 600s;

    # Large body size
    client_max_body_size 500m;

    # Temporary file path for large uploads
    client_body_temp_path /var/cache/nginx/upload_temp;
}
```
Step 9: Monitor Upstream Performance
```nginx
# Enable the stub_status page:
server {
    listen 8080;

    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
}

# Check it (shell):
#   curl http://localhost:8080/nginx_status
#
# Output:
#   Active connections: 10
#   server accepts handled requests
#    1000 1000 5000
#   Reading: 0 Writing: 5 Waiting: 5

# Log upstream response time:
log_format main '$remote_addr - $request_time $upstream_response_time "$request"';
access_log /var/log/nginx/access.log main;

# With this format, $upstream_response_time is field 4 (shell):
#   awk '$4 > 5' /var/log/nginx/access.log
```
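Averages hide the outliers that actually hit the timeout, so a percentile view of `$upstream_response_time` is more telling. A sketch using POSIX `awk` against the log format above (the sample values stand in for a real access log):

```shell
# Sample values stand in for /var/log/nginx/access.log in the "main" format above;
# field 4 is $upstream_response_time.
cat > /tmp/sample_access.log <<'EOF'
10.0.0.1 - 0.120 0.118 "GET /api/users HTTP/1.1"
10.0.0.1 - 0.250 0.245 "GET /api/users HTTP/1.1"
10.0.0.1 - 4.800 4.790 "GET /api/export HTTP/1.1"
10.0.0.1 - 0.090 0.088 "GET /api/users HTTP/1.1"
EOF

# Sort the response times and pick the value at the 95th-percentile index.
awk '{print $4}' /tmp/sample_access.log | sort -n | awk '
  { v[NR] = $1 }
  END {
    idx = int(NR * 0.95); if (idx < 1) idx = 1
    printf "p95 upstream_response_time: %.3fs\n", v[idx]
  }'
# prints: p95 upstream_response_time: 0.245s
```

On a real log, a p95 near the configured `proxy_read_timeout` means timeouts are imminent even if the average looks healthy.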
Step 10: Configure Logging and Alerts
```bash
# Create monitoring script:
cat << 'EOF' > /usr/local/bin/monitor-nginx.sh
#!/bin/bash

echo "=== Nginx Status ==="
curl -s http://localhost:8080/nginx_status

echo ""
echo "=== Recent Timeouts ==="
grep "timed out" /var/log/nginx/error.log | tail -10

echo ""
echo "=== Upstream Response Times ==="
tail -100 /var/log/nginx/access.log | awk '{print $4}' | sort -n | tail -10

echo ""
echo "=== Backend Connectivity ==="
for backend in 10.0.0.2 10.0.0.3; do
  echo -n "$backend:8080: "
  nc -zv -w 2 "$backend" 8080 2>&1 | grep -oE "succeeded|refused|timed out"
done

echo ""
echo "=== Nginx Processes ==="
ps aux | grep nginx | grep -v grep
EOF

chmod +x /usr/local/bin/monitor-nginx.sh

# Prometheus metrics with nginx-prometheus-exporter:
curl http://localhost:9113/metrics | grep nginx

# Key metrics (names vary by exporter):
#   nginx_http_requests_total
#   nginx_upstream_response_time_seconds
#   nginx_connections_active

# Prometheus alert rule for upstream timeouts (YAML, not shell;
# the metric name depends on your exporter):
#   - alert: NginxUpstreamTimeouts
#     expr: rate(nginx_upstream_timeouts_total[5m]) > 0
#     for: 2m
#     labels:
#       severity: warning
#     annotations:
#       summary: "Nginx upstream timeouts detected"
```
Nginx Upstream Timeout Checklist
| Check | Command | Expected |
|---|---|---|
| Error logs | `grep "timed out" /var/log/nginx/error.log` | Identify pattern |
| Backend health | `curl -I http://backend` | Responding |
| Timeout config | `nginx -T \| grep timeout` | Adequate values |
| Upstream status | `curl localhost:8080/nginx_status` | Servers up |
| Response time | `$upstream_response_time` in access log | Within timeout |
| Backend resources | `top` on the backend | Available |
Verify the Fix
```bash
# After adjusting timeout settings

# 1. Reload Nginx (expect: configuration valid)
nginx -t && nginx -s reload

# 2. Test a slow endpoint (expect: completes within the timeout)
curl -w "Time: %{time_total}s\n" http://api.example.com/api/export

# 3. Check the error log (expect: no new timeout errors)
tail /var/log/nginx/error.log | grep timeout

# 4. Monitor the access log (expect: no error statuses)
tail -f /var/log/nginx/access.log | grep -v 200

# 5. Test under load (expect: all requests complete)
ab -n 1000 -c 100 http://api.example.com/api/users

# 6. Verify upstream health (expect: active connections look healthy)
curl http://localhost:8080/nginx_status
```
Related Issues
- [Fix HAProxy Backend 503](/articles/fix-haproxy-backend-503)
- [Fix Traefik Routing Not Working](/articles/fix-traefik-routing-not-working)
- [Fix Caddy HTTPS Certificate Not Obtained](/articles/fix-caddy-https-certificate-not-obtained)