# Fix Nginx proxy_buffering Off for Streaming Response Issues

Your application streams large files or server-sent events through Nginx, but clients experience long delays before receiving any data, or streaming responses appear in large chunks instead of a smooth flow. The issue is Nginx proxy buffering.

## Understanding Proxy Buffering

By default, Nginx buffers responses from upstream servers. It reads the entire response from the backend, stores it in memory or on disk, and then sends it to the client. This is optimal for most use cases because it:

  • Frees the upstream connection quickly so the backend can handle more requests
  • Allows Nginx to serve slow clients without holding backend connections open
  • Enables Nginx to apply gzip compression to the full response

But for streaming responses, buffering is the enemy.

## Symptoms of Buffering Problems

  • Server-sent events arrive in bursts instead of continuously
  • Large file downloads start after a long delay
  • WebSocket connections drop unexpectedly
  • Video streaming buffers excessively

Check your Nginx error log for buffering messages:

```
2026/04/08 12:15:33 [warn] 4567#4567: *23456 an upstream response is buffered to a temporary file /var/lib/nginx/proxy/1/00/0000000001
```

This means the response overflowed the in-memory buffers (`proxy_buffer_size` and `proxy_buffers`) and is being written to a temporary file on disk (up to `proxy_max_temp_file_size`), adding significant latency.
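If you want to keep buffering but avoid the disk spill, you can disable temp-file use entirely. A minimal sketch (`/api/` and `backend` are placeholders):

```nginx
location /api/ {
    proxy_pass http://backend;
    proxy_buffering on;
    # Never spill to disk: once the memory buffers are full, Nginx
    # forwards data to the client at the pace the backend sends it.
    proxy_max_temp_file_size 0;
}
```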

## Disabling Buffering for Streaming Endpoints

```nginx
server {
    # Default: buffering ON for regular requests
    location / {
        proxy_pass http://backend;
        proxy_buffering on;
    }

    # Streaming endpoint: buffering OFF
    location /api/stream/ {
        proxy_pass http://backend;
        proxy_buffering off;
        proxy_cache off;
        proxy_request_buffering off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }

    # Server-sent events: buffering OFF
    location /api/events {
        proxy_pass http://backend;
        proxy_buffering off;
        proxy_cache off;
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
        proxy_set_header Connection '';
        proxy_http_version 1.1;
        chunked_transfer_encoding off;
    }

    # Large file download: streaming with minimal buffering
    location /downloads/ {
        proxy_pass http://backend;
        proxy_buffering off;
        proxy_read_timeout 600s;
    }
}
```
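Instead of a location-level switch, the backend can also opt out of buffering per response by sending the `X-Accel-Buffering: no` header, which Nginx honors unless `proxy_ignore_headers` suppresses it. A minimal SSE backend sketch using Python's standard library (the port and event payloads are illustrative, not from the original setup):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import time

def sse_event(data: str) -> bytes:
    """Format one server-sent event frame."""
    return f"data: {data}\n\n".encode()

class StreamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/event-stream")
        self.send_header("Cache-Control", "no-cache")
        # Per-response opt-out: Nginx disables proxy buffering when it
        # sees this header in the upstream response.
        self.send_header("X-Accel-Buffering", "no")
        self.end_headers()
        for i in range(5):
            self.wfile.write(sse_event(f"tick {i}"))
            self.wfile.flush()  # push each event to Nginx immediately
            time.sleep(1)

# To run behind the /api/events location above:
# HTTPServer(("127.0.0.1", 8080), StreamHandler).serve_forever()
```

Note the backend must still flush after each event; disabling buffering in Nginx does not help if the application itself holds data in its own output buffer.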

## Buffering Configuration for WebSocket

WebSockets require specific configuration:

```nginx
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream websocket {
    server 127.0.0.1:8080;
}

server {
    location /ws {
        proxy_pass http://websocket;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 86400s;
        proxy_buffering off;
    }
}
```

With `proxy_buffering off`, WebSocket frames are forwarded immediately instead of waiting for a buffer to fill.
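To verify the handshake actually passes through Nginx, check that the 101 response carries the correct `Sec-WebSocket-Accept` header, which RFC 6455 derives from the client's key. A small Python sketch of that check (the example key is the one from RFC 6455 itself):

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the WebSocket handshake.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def expected_accept(sec_websocket_key: str) -> str:
    """Value a compliant server must return in Sec-WebSocket-Accept."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()

# RFC 6455 sample handshake key:
print(expected_accept("dGhlIHNhbXBsZSBub25jZQ=="))
# s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

If the value in the proxied 101 response does not match, the upgrade headers are being dropped somewhere between Nginx and the backend.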

## Tuning Buffer Sizes Instead of Disabling

If you need some buffering but want to reduce latency, tune the buffer sizes:

```nginx
location /api/ {
    proxy_pass http://backend;
    proxy_buffering on;
    proxy_buffer_size 4k;
    proxy_buffers 8 4k;
    proxy_busy_buffers_size 8k;
}
```

Smaller buffers mean Nginx starts sending data to the client sooner. The defaults are platform-dependent: `proxy_buffer_size` is one memory page (4k or 8k) and `proxy_buffers` is `8 4k|8k`. Reducing these makes responses feel more responsive and lowers memory usage per connection, at the cost of more read/write operations and backend connections staying open longer when clients are slow.
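The worst-case in-memory footprint per connection is simple arithmetic over these settings. A quick sanity check, using the 4k values from the example above:

```python
# Worst-case memory Nginx may allocate to buffer one proxied response.
proxy_buffer_size = 4 * 1024               # one buffer for the response header
buffer_count, buffer_size = 8, 4 * 1024    # proxy_buffers 8 4k

total = proxy_buffer_size + buffer_count * buffer_size
print(total // 1024)  # 36 (KiB per connection)
```

At 10,000 concurrent connections that is roughly 360 MiB of buffer space, which is why buffer sizes are usually tuned rather than simply enlarged.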

## Testing Streaming Behavior

Use curl to observe the timing of streamed data:

```bash
curl -N -s https://example.com/api/stream/events | while read -r line; do
    echo "$(date +%T.%N): $line"
done
```

The `-N` (`--no-buffer`) flag disables curl's own output buffering so you see data as it arrives. Each line should be timestamped, showing whether data arrives smoothly or in bursts.

If you see gaps of several seconds between timestamps, buffering is still active somewhere in the chain: check the backend application, any intermediate proxies, and the Nginx configuration.
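The same check can be scripted so the gaps are measured rather than eyeballed. A sketch in Python using only the standard library (the URL is a placeholder for your streaming endpoint):

```python
import time
import urllib.request

def inter_arrival(timestamps):
    """Gaps in seconds between consecutive arrival timestamps."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def stream_timings(url, max_lines=20):
    """Record an arrival timestamp for each line of a streamed response."""
    stamps = []
    with urllib.request.urlopen(url) as resp:
        for i, _line in enumerate(resp):
            stamps.append(time.monotonic())
            if i + 1 >= max_lines:
                break
    return stamps

# Example:
# gaps = inter_arrival(stream_timings("https://example.com/api/stream/events"))
# Smooth streaming -> gaps near the backend's emit interval;
# buffering -> one long initial wait, then a burst of near-zero gaps.
```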