Your API endpoint usually responds in 200 milliseconds, but occasionally it needs 30 seconds to process a large report. Users see "504 Gateway Timeout" instead of their results. The frustrating part? The backend actually finishes processing—the response just never makes it back through Nginx.

Let's diagnose why Nginx is cutting off the connection and how to fix it.

The 504 Gateway Timeout Error Explained

A 504 Gateway Timeout means Nginx successfully forwarded your request to the backend, but the backend didn't respond within the allowed time. Unlike a 502 Bad Gateway (where the connection is refused or the backend returns an invalid response), a 504 confirms the backend accepted the request—it just took too long.

The error appears in your browser as:

```bash
504 Gateway Time-out
nginx/1.24.0
```

And in your error log (/var/log/nginx/error.log):

```bash
2026/04/04 10:30:45 [error] 1234#1234: *5678 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.100, server: api.example.com, request: "POST /reports/generate HTTP/1.1", upstream: "http://127.0.0.1:3000/reports/generate"
```
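Each field in that line is useful for triage. A sed one-liner pulls out the client, server, and request (shown here against the sample line above; in practice, pipe in matching lines from your real error.log):

```shell
# Extract client, server, and request from an upstream-timeout line.
# The printf sample stands in for: grep 'upstream timed out' /var/log/nginx/error.log
printf '%s\n' '2026/04/04 10:30:45 [error] 1234#1234: *5678 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.100, server: api.example.com, request: "POST /reports/generate HTTP/1.1", upstream: "http://127.0.0.1:3000/reports/generate"' |
  sed -E 's/.*client: ([^,]+), server: ([^,]+), request: "([^"]+)".*/\1 \2 \3/'
```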

Step 1: Identify Where the Timeout Occurs

Nginx has multiple timeout settings. First, check which one is being hit:

```bash
# Watch the error log while reproducing the issue
tail -f /var/log/nginx/error.log | grep -i timeout
```

The error message tells you exactly where:

- `while reading response header from upstream` - the backend accepted the connection and started responding, but didn't finish sending headers in time
- `while connecting to upstream` - the TCP connection itself timed out (an outright connection refusal surfaces as a 502 instead)
- `while reading upstream` - the backend sent headers, but the response body is arriving too slowly
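When many requests fail, tallying those phase phrases tells you which timeout you're actually hitting. A sketch, with inline sample lines standing in for your real error.log:

```shell
# Count how often each timeout phase appears in the error log.
# Replace the printf with: cat /var/log/nginx/error.log
printf '%s\n' \
  'upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.100' \
  'upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.101' \
  'upstream timed out (110: Connection timed out) while connecting to upstream, client: 192.168.1.102' |
  grep -o 'while [a-z]* [a-z ]*upstream' | sort | uniq -c | sort -rn
```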

Step 2: Check Current Timeout Configuration

View your current Nginx timeout settings:

```bash
# Check main configuration
grep -r "timeout" /etc/nginx/

# Or view the full server block
cat /etc/nginx/sites-available/api.example.com
```

Default timeout values are often too short for long-running operations:

```nginx
location / {
    proxy_pass http://backend;
    # These are implicit defaults:
    # proxy_connect_timeout 60s;
    # proxy_send_timeout 60s;
    # proxy_read_timeout 60s;
}
```

Step 3: Increase Timeout Values

For endpoints that legitimately need more time, increase the timeouts:

```nginx
server {
    listen 80;
    server_name api.example.com;

    # Global timeout increase (use carefully)
    proxy_connect_timeout 300s;
    proxy_send_timeout 300s;
    proxy_read_timeout 300s;

    location / {
        proxy_pass http://backend;
    }

    # Specific timeout for long-running endpoints
    location /reports/ {
        proxy_pass http://backend;
        proxy_connect_timeout 300s;
        proxy_send_timeout 300s;
        proxy_read_timeout 300s;

        # Disable buffering for streaming responses
        proxy_buffering off;
    }
}
```

What each timeout controls:

| Directive | What It Controls |
|---|---|
| `proxy_connect_timeout` | Time to establish the TCP connection to the backend |
| `proxy_send_timeout` | Time between two successive writes to the backend (not the whole request) |
| `proxy_read_timeout` | Time between two successive reads from the backend (waiting for a response) |

Apply the changes:

```bash
nginx -t && systemctl reload nginx
```

Step 4: Fix the FastCGI Timeout (PHP Applications)

If you're using PHP-FPM, the timeout is configured differently:

```nginx
location ~ \.php$ {
    fastcgi_pass unix:/run/php/php8.2-fpm.sock;
    fastcgi_read_timeout 300s;  # Default is 60s
    fastcgi_send_timeout 300s;
}
```

Also check PHP-FPM's own timeout:

Edit /etc/php/8.2/fpm/pool.d/www.conf:

```ini
; Maximum execution time for PHP scripts
php_admin_value[max_execution_time] = 300

; Request termination timeout
request_terminate_timeout = 300
```

Restart PHP-FPM:

```bash
systemctl restart php8.2-fpm
```

Step 5: Check Upstream Keepalive Timeouts

Sometimes the backend closes idle connections that Nginx tries to reuse:

```nginx
upstream backend {
    server 127.0.0.1:3000;
    keepalive 32;
    keepalive_timeout 60s;  # Match your backend's timeout
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```

Check your backend's keepalive timeout and match it in Nginx:

```bash
# For Node.js apps, check the server config
# For Express:
grep -r "keepAliveTimeout" .

# For Python Gunicorn
gunicorn --timeout 300 --keep-alive 60 app:app
```

Step 6: Handle Slow SSL Handshakes

If your backend uses HTTPS, SSL negotiation can add significant latency:

```nginx
location / {
    proxy_pass https://backend-secure;
    proxy_ssl_server_name on;
    proxy_ssl_verify off;  # Only for internal backends

    # SSL session caching
    proxy_ssl_session_reuse on;
}
```

Or better yet, use HTTP between Nginx and backend (they're on the same network):

```js
// Backend config (Node.js example), in app.js: bind to localhost only
app.listen(3000, '127.0.0.1');
```

```nginx
# Nginx config
location / {
    proxy_pass http://127.0.0.1:3000;  # HTTP, not HTTPS
}
```

Step 7: Offload Long Operations to Background Jobs

Sometimes increasing timeouts is a band-aid. If you have endpoints taking 30+ seconds:

```bash
# Check which endpoints are slow
grep "upstream timed out" /var/log/nginx/error.log | \
    awk -F'"' '{print $2}' | sort | uniq -c | sort -rn
```

For consistently slow operations, consider:

1. Return immediately, process asynchronously:

```nginx
# Client gets 202 Accepted immediately
location /reports/generate {
    proxy_pass http://backend;
    proxy_read_timeout 10s;  # Backend should accept quickly
}

# Client polls for completion
location /reports/status {
    proxy_pass http://backend;
}
```

2. Stream the response:

```nginx
location /export {
    proxy_pass http://backend;
    proxy_buffering off;  # Stream chunks as they arrive
    proxy_read_timeout 600s;
}
```
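The client side of pattern 1 is a simple poll loop. A self-contained sketch: the `if` is a stand-in for a real status check such as `curl -s .../reports/status | jq -r '.status'` against a hypothetical status endpoint, and here it reports done on the third poll:

```shell
# Poll until the job reports done (add sleep 5 between real HTTP calls)
POLLS=0
STATUS=pending
while [ "$STATUS" != "done" ]; do
  POLLS=$((POLLS + 1))
  # Stand-in for: STATUS=$(curl -s "$API/reports/status?id=$JOB_ID" | jq -r '.status')
  if [ "$POLLS" -ge 3 ]; then STATUS=done; else STATUS=pending; fi
done
echo "job finished after $POLLS polls"
```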

Step 8: Monitor Backend Response Times

Find patterns in slow responses:

```bash
# Add to Nginx log format
log_format timed '$remote_addr - $request_time $upstream_response_time '
                 '$status $request';

# Use in server block
access_log /var/log/nginx/timed.log timed;

# Then analyze
awk '{print $4}' /var/log/nginx/timed.log | sort -n | tail -20
```

The `$upstream_response_time` field shows how long the backend took. If it's consistently high, your backend needs optimization, not just longer timeouts.
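A quick awk pass over that log summarizes backend latency. A sketch against inline sample lines in the `timed` format (point it at /var/log/nginx/timed.log in practice):

```shell
# avg and max of field 4 ($upstream_response_time) in the timed log format
printf '%s\n' \
  '192.168.1.100 - 0.210 0.205 200 GET /health' \
  '192.168.1.100 - 31.400 31.395 504 POST /reports/generate' \
  '192.168.1.100 - 0.180 0.176 200 GET /health' |
  awk '{sum += $4; if ($4 > max) max = $4} END {printf "avg=%.3f max=%.3f\n", sum/NR, max}'
```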

Verification Checklist

After making changes:

```bash
# Test configuration
nginx -t

# Reload Nginx
systemctl reload nginx

# Test the slow endpoint
time curl -X POST https://api.example.com/reports/generate

# Watch for timeout errors
tail -f /var/log/nginx/error.log | grep -i timeout

# Check backend is actually finishing
# (monitor backend logs during request)
```

Quick Reference: Timeout Values by Use Case

| Use Case | Recommended Timeout |
|---|---|
| Standard API | 30-60 seconds |
| File upload | 120-300 seconds |
| Report generation | 300-600 seconds |
| Database export | 600+ seconds |
| SSE/WebSocket | Disable or very high |

Remember: Long timeouts tie up worker connections. For truly long operations, use background jobs with status polling instead of synchronous requests.