Introduction
V8's JSON.parse() creates the entire JavaScript object tree in memory at once. For large JSON payloads (hundreds of MB), this can exceed the default heap size (approximately 1.5 GB on 64-bit systems). The parsing process requires 2-10x the raw JSON size in memory because V8 creates objects, arrays, and strings for every element.
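The amplification can be observed directly. A rough sketch (exact ratios vary by payload shape, GC timing, and V8 version, so treat the numbers as illustrative):

```javascript
// Build a ~3 MB JSON string, then measure heap growth when parsing it.
const n = 100000;
const records = Array.from({ length: n }, (_, i) => ({ id: i, name: `user-${i}` }));
const json = JSON.stringify(records);

const before = process.memoryUsage().heapUsed;
const parsed = JSON.parse(json);
const after = process.memoryUsage().heapUsed;

console.log(`raw JSON: ${(json.length / 1e6).toFixed(1)} MB`);
console.log(`heap growth after parse: ${((after - before) / 1e6).toFixed(1)} MB`);
console.log(`objects parsed: ${parsed.length}`);
```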
Symptoms
- `FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory`
- `FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory`
- Process crashes with exit code 134 (SIGABRT)
- Works with small JSON files but crashes on large exports
- Memory spike visible just before crash in monitoring
```
<--- Last few GCs --->

[12345:0x5e8a000]    45123 ms: Mark-sweep 1480.2 (1510.5) -> 1470.1 (1510.8) MB, 850.2 / 0.0 ms  (average mu = 0.150, current mu = 0.020) allocation failure scavenge might not succeed

<--- JS stacktrace --->

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0xb091e0 node::Abort() [node]
 2: 0xa1b70e  [node]
 3: 0xce1a20 v8::Utils::ReportOOMFailure(...) [node]
```
Common Causes
- Loading entire database export as JSON
- Parsing large API responses
- Reading multi-MB configuration files
- Calling `JSON.parse(fs.readFileSync('large-data.json'))` on a large file
- Processing webhook payloads with many nested objects
Step-by-Step Fix
1. Increase the V8 heap size:

```bash
# Default is ~1.5GB on 64-bit, increase to 4GB
node --max-old-space-size=4096 server.js

# Or via environment variable
NODE_OPTIONS="--max-old-space-size=4096" node server.js

# In package.json scripts:
# "start": "node --max-old-space-size=4096 server.js"
```
2. Use a streaming JSON parser:

```javascript
const clarinet = require('clarinet');
const fs = require('fs');

// Stream-parse JSON without loading it all into memory
function streamParseJson(filePath) {
  // createStream() wraps the parser in a writable stream we can pipe into
  const parseStream = clarinet.createStream();
  const items = [];
  let inTargetArray = false;

  parseStream.on('value', (value) => {
    if (inTargetArray) {
      items.push(value);
      // Process and discard in batches to keep memory flat
      if (items.length >= 1000) {
        processBatch(items.splice(0));
      }
    }
  });

  parseStream.on('openarray', () => { inTargetArray = true; });
  parseStream.on('closearray', () => {
    if (items.length > 0) processBatch(items.splice(0));
  });

  fs.createReadStream(filePath).pipe(parseStream);
}

function processBatch(batch) {
  console.log(`Processing batch of ${batch.length} items`);
  // Process the batch, then let it be garbage-collected
}

streamParseJson('large-export.json');
```
3. Use JSONStream for targeted extraction:

```javascript
const JSONStream = require('JSONStream');
const fs = require('fs');
const es = require('event-stream');

// Only extract items at a specific path from the large JSON
fs.createReadStream('large-export.json')
  .pipe(JSONStream.parse('results.*')) // Emit each item in the results array
  .pipe(es.through(function (data) {
    // Process each item individually; don't queue it downstream,
    // so items are never accumulated in memory
    processItem(data);
  }, function () {
    console.log('Done processing all items');
    this.emit('end');
  }));
```
4. Use ndjson (newline-delimited JSON) instead:

```javascript
const split = require('split2');
const fs = require('fs');

// Instead of one giant JSON array, store one JSON object per line:
// {"id":1,"name":"Alice"}
// {"id":2,"name":"Bob"}
fs.createReadStream('data.ndjson')
  .pipe(split()) // Split the stream on newlines
  .on('data', (line) => {
    if (line.length === 0) return; // Skip blank lines
    const item = JSON.parse(line); // Parse one object at a time
    processItem(item);
  })
  .on('end', () => console.log('Done'));
```
Prevention
- Use streaming JSON parsers for files over 100MB
- Prefer newline-delimited JSON (ndjson) for large data exports
- Set `--max-old-space-size` in production based on container memory limits
- In Kubernetes, set `--max-old-space-size` to about 75% of the container memory limit
- Monitor heap usage with `process.memoryUsage().heapUsed`
- Use `node --trace-gc` to monitor garbage collection behavior
- Avoid `JSON.parse(fs.readFileSync())` - use streams instead