Introduction
Node.js stream backpressure occurs when a readable stream produces data faster than a writable stream can consume it. Without proper backpressure handling, the writable stream's internal buffer keeps growing past the highWaterMark (the mark is a threshold, not a hard limit), eventually exhausting memory; mismanaged stream lifecycles in the same code often surface as ERR_STREAM_WRITE_AFTER_END or ERR_STREAM_DESTROYED errors as well. This is particularly common when piping large files to slow network destinations, transforming data at different speeds, or when the destination becomes temporarily unavailable. The pipe() method handles backpressure automatically, but custom stream implementations and event-based data handling often miss the backpressure signals.
Symptoms
Memory exhaustion:
```
<--- Last few GCs --->

[12345:0x55a1b2c3d4e0]   456789 ms: Mark-sweep 2045.6 (2089.2) -> 2034.1 (2090.2) MB

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed
```
Or stream errors:
```
node:internal/streams/writable:320
    throw new ERR_STREAM_WRITE_AFTER_END();
    ^

Error [ERR_STREAM_WRITE_AFTER_END]: write after end
    at writeAfterEnd (node:internal/streams/writable:320:15)
```
Or:
```
Error [ERR_STREAM_CANNOT_PIPE]: Cannot pipe, not readable
```
Common Causes
- Manual data transfer without backpressure check: Using the `data` event without checking the `write()` return value
- Misconfigured `highWaterMark`: The defaults (16 objects in object mode, 16KB for generic streams, 64KB for fs streams) are often too small for large file transfers
- Not handling stream errors: An error on one stream in a `pipe()` chain does not propagate to the others
- Writing after stream end: Code writes to a stream after `end()` was called
- Transform stream not calling the callback: A custom Transform stream never calls `this.push()` or the callback, so the pipeline stalls
- Slow destination with fast source: Reading a 10GB file and uploading to a rate-limited API
Step-by-Step Fix
Step 1: Use stream.pipeline for proper error handling
```javascript
const { pipeline } = require('stream/promises');
const fs = require('fs');
const zlib = require('zlib');

async function compressFile(inputPath, outputPath) {
  const source = fs.createReadStream(inputPath);
  const gzip = zlib.createGzip();
  const destination = fs.createWriteStream(outputPath);

  try {
    await pipeline(source, gzip, destination);
    console.log('File compressed successfully');
  } catch (err) {
    console.error('Pipeline failed:', err.message);
    // pipeline automatically cleans up (closes all streams)
  }
}

compressFile('large-file.txt', 'large-file.txt.gz');
```
pipeline handles backpressure automatically, propagates errors, and cleans up all streams on error.
Step 2: Handle backpressure manually with write() return value
```javascript
const fs = require('fs');
const net = require('net');

function sendFile(filePath, socket) {
  const readStream = fs.createReadStream(filePath);

  readStream.on('data', (chunk) => {
    // write() returns false when the buffer is full
    const canContinue = socket.write(chunk);

    if (!canContinue) {
      // Backpressure detected - pause the reader
      readStream.pause();
    }
  });

  // Resume reading when the drain event fires
  socket.on('drain', () => {
    readStream.resume();
  });

  readStream.on('end', () => {
    socket.end();
  });

  readStream.on('error', (err) => {
    console.error('Read error:', err.message);
    socket.destroy();
  });
}
```
Step 3: Configure highWaterMark for large transfers
```javascript
const fs = require('fs');

// For large file transfers, increase the buffer
const readStream = fs.createReadStream('large-file.dat', {
  highWaterMark: 1024 * 1024, // 1MB chunks instead of the default 64KB
});

const writeStream = fs.createWriteStream('output.dat', {
  highWaterMark: 1024 * 1024, // 1MB write buffer
});

readStream.pipe(writeStream);
```
Prevention
- Always use `stream.pipeline()` or `stream/promises` `pipeline` for stream chains
- Check the return value of `stream.write()` and pause/resume accordingly
- Set `highWaterMark` based on your data size and memory constraints
- Handle `error` events on ALL streams in a pipeline
- Use `finished()` from the `stream` module to detect stream completion
- Monitor memory usage during large stream operations
- Use `readStream.pause()` and `readStream.resume()` for fine-grained backpressure control