# Fix Nginx Upstream Prematurely Closed Connection Error

Your Nginx reverse proxy serves requests to an upstream backend, but some requests fail with:

```bash
2026/04/08 15:42:11 [error] 7890#7890: *67890 upstream prematurely closed connection while reading response header from upstream, client: 203.0.113.77, server: app.example.com, request: "GET /api/reports/monthly HTTP/1.1", upstream: "http://127.0.0.1:8080/api/reports/monthly"
```

The upstream accepted the connection, started processing the request, but closed the socket before sending a complete HTTP response. Nginx returns a 502 Bad Gateway to the client.

## Why Connections Close Prematurely

Unlike "Connection refused" where the backend is not running, "prematurely closed" means the backend was running and accepted the connection, then terminated it abnormally. The common causes are:

1. Backend process crash during request handling (unhandled exception, segfault)
2. Out-of-memory (OOM) killer terminating the backend process
3. Backend timeout killing a long-running request
4. Backend keepalive timeout shorter than Nginx expects
5. Backend restart during an active request

## Step 1: Check Backend Logs

The first place to look is the backend application's error log. For a Node.js application:

```bash
sudo journalctl -u myapp --since "10 minutes ago" --no-pager | grep -E "error|fatal|exception|uncaught"
```

For a Python/Gunicorn application:

```bash
sudo journalctl -u gunicorn --since "10 minutes ago" --no-pager | grep -E "error|critical|timeout|worker"
```

Look for stack traces that correlate with the timestamp of the Nginx error.
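To line the two logs up, you can extract the timestamp from the Nginx error line and feed it to journalctl's --since flag. A minimal sketch; the sample line stands in for /var/log/nginx/error.log, and myapp is a placeholder service name:

```bash
# Convert the Nginx timestamp (2026/04/08 15:42:11) into the
# YYYY-MM-DD form that journalctl's --since flag expects
line='2026/04/08 15:42:11 [error] 7890#7890: *67890 upstream prematurely closed connection while reading response header from upstream'
ts=$(echo "$line" | awk '{gsub("/", "-", $1); print $1, $2}')
echo "$ts"

# Then inspect the backend around that moment:
# sudo journalctl -u myapp --since "$ts" --no-pager | head -50
```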

## Step 2: Check for OOM Kills

```bash
sudo dmesg -T | grep -i "killed process"
```

If you see entries like:

```bash
[Wed Apr  8 15:42:10 2026] Out of memory: Killed process 5432 (node) total-vm:2048576kB, anon-rss:1890234kB
```

The kernel killed your backend because it used too much memory. The connection was closed because the process no longer exists.

Fix by increasing memory limits or optimizing the application:

```bash
# Check current memory usage
systemctl status myapp -l

# Add swap as an emergency buffer
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
```
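Swap only buys breathing room. If the service legitimately needs a defined memory budget, a systemd resource override is a cleaner long-term control. A sketch, assuming the service is named myapp and these limits suit your host:

```ini
# /etc/systemd/system/myapp.service.d/override.conf
# Hypothetical limits -- tune to the host's actual capacity
[Service]
MemoryHigh=1800M
MemoryMax=2G
```

Apply with `sudo systemctl daemon-reload && sudo systemctl restart myapp`. MemoryHigh throttles the service as it approaches the limit; MemoryMax is the hard cap at which the kernel kills it, so set it above the application's realistic peak.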

## Step 3: Backend Timeout Issues

Application servers enforce their own request timeouts: Gunicorn defaults to 30 seconds, Puma's worker_timeout defaults to 60 seconds, and Node.js applies no request timeout by default, though Express middleware may add one.

If the backend timeout fires before the response is ready, it closes the connection:

```bash
# Gunicorn: raise the worker timeout to 120 seconds
gunicorn --timeout 120 app:app

# Puma (config/puma.rb)
worker_timeout 120

# Check the currently configured Gunicorn timeout
ps aux | grep gunicorn | grep timeout
```
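If you raise the backend timeout, check the Nginx side too: proxy_read_timeout defaults to 60 seconds, so Nginx may now give up before the backend does. A sketch, assuming the 120-second Gunicorn timeout above:

```nginx
location /api/ {
    proxy_pass http://backend;
    # Keep these >= the backend's own timeout (120s in the Gunicorn
    # example above) so the backend, not Nginx, decides when to give up
    proxy_read_timeout 130s;
    proxy_send_timeout 130s;
}
```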

## Step 4: Keepalive Timeout Mismatch

If Nginx reuses a keepalive connection but the backend has already timed it out:

```nginx
upstream backend {
    server 127.0.0.1:8080;
    keepalive 16;
    keepalive_timeout 60s;
    keepalive_requests 100;
}
```

The keepalive_timeout here must match or be less than the backend's keepalive timeout. If the backend closes idle connections after 30 seconds but Nginx thinks they are alive for 60 seconds, Nginx will try to reuse a dead connection.
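Note that upstream keepalive only takes effect if Nginx speaks HTTP/1.1 to the backend and clears the Connection header; without the directives below, the keepalive pool above is silently ignored:

```nginx
location / {
    proxy_pass http://backend;
    # Both directives are required for the upstream keepalive pool to be used
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}
```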

## Step 5: Configure Graceful Retries

Add automatic retry on upstream failure:

```nginx
location / {
    proxy_pass http://backend;
    proxy_next_upstream error timeout http_502 http_503;
    proxy_next_upstream_tries 2;
    proxy_next_upstream_timeout 10s;
}
```

This tells Nginx to retry the request on the next available upstream server when the first attempt fails with a connection error, a timeout, or a 502/503 response. proxy_next_upstream_tries 2 caps the number of attempts, and proxy_next_upstream_timeout 10s caps the total time spent across retries, preventing runaway retry loops.
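One caveat: by default Nginx does not retry non-idempotent requests (POST, LOCK, PATCH) at all, to avoid duplicate side effects. If the backend handles duplicate deliveries safely, you can opt in explicitly:

```nginx
# Only add non_idempotent if duplicate POSTs are safe for your application
proxy_next_upstream error timeout http_502 http_503 non_idempotent;
```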

## Debugging with strace

If logs do not reveal the cause, trace the backend process:

```bash
sudo strace -p $(pgrep -f "node.*server.js") -e trace=network -f 2>&1 | grep -E "close|shutdown|ECONNRESET"
```

This shows when the socket is closed and by which system call. A close() followed by the process exiting suggests a crash; a shutdown() suggests a deliberate close, often from a timeout handler.
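When several upstream servers are in play, it also helps to see whether the failures cluster on one backend. A small sketch that tallies premature-close errors per upstream URL; the here-doc sample lines stand in for the real error log:

```bash
# Tally "prematurely closed" errors per upstream address to see
# whether the failures concentrate on a single backend
counts=$(awk '/prematurely closed/ {
    for (i = 1; i <= NF; i++)
        if ($i == "upstream:") { gsub(/"/, "", $(i+1)); n[$(i+1)]++ }
} END { for (u in n) print n[u], u }' <<'EOF'
2026/04/08 15:42:11 [error] upstream prematurely closed connection, upstream: "http://127.0.0.1:8080/api/reports/monthly"
2026/04/08 15:43:02 [error] upstream prematurely closed connection, upstream: "http://127.0.0.1:8080/api/reports/monthly"
2026/04/08 15:44:10 [error] connect() failed, upstream: "http://127.0.0.1:8081/"
EOF
)
echo "$counts"
```

Against a real log, replace the here-doc with `< /var/log/nginx/error.log`.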