Introduction
MongoDB memory issues typically manifest as either gradual performance degradation or sudden process termination by the operating system's OOM (Out of Memory) killer. The WiredTiger storage engine manages its own cache, but misconfiguration or memory-hungry operations can exhaust available RAM, causing critical failures.
Symptoms
System and MongoDB indicators of memory pressure:
```text
# MongoDB logs - cache pressure
WiredTiger message: Cache stuck
WiredTiger message: Cache overflow
WT_CACHE_STUCK: cache overflow

# System logs - OOM kill
Out of memory: Killed process 1234 (mongod)
oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0-1

# mongosh memory warnings
db.serverStatus().wiredTiger.cache
# "bytes currently in the cache" near "maximum bytes configured"

# Connection failures
MongoNetworkError: connection timeout
```
Application symptoms include slow queries, connection timeouts, and eventual service unavailability.
Common Causes
1. WiredTiger cache too large - the default is 50% of (RAM minus 1 GB), with a 256 MB floor, which may be too much for shared servers
2. Large in-memory sorts - sorting without a supporting index loads the entire result set into memory
3. Aggregation memory spikes - $group and $sort stages without proper indexing
4. Too many connections - each connection consumes roughly 1 MB of stack space
5. Journaling pressure - high write volume filling the journal buffer
6. OS page cache competition - the filesystem cache competes with the WiredTiger cache for the same RAM
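To see why cause 1 bites on shared servers, the default can be computed directly; a minimal sketch of the documented formula (the function name is illustrative):

```javascript
// Sketch: WiredTiger's default cache size for a given amount of system RAM,
// per the documented formula: max(50% of (RAM - 1 GB), 256 MB)
function defaultWiredTigerCacheGB(ramGB) {
  return Math.max(0.5 * (ramGB - 1), 0.25);
}

// On a 16 GB server the default cache alone is 7.5 GB, before
// connections, query memory, and the OS page cache are counted
console.log(defaultWiredTigerCacheGB(16)); // 7.5
console.log(defaultWiredTigerCacheGB(1));  // 0.25 (the floor)
```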
Step-by-Step Fix
Step 1: Diagnose Memory Usage
Check current memory state:
```javascript
// Connect with mongosh, then run:

// WiredTiger cache statistics
db.serverStatus().wiredTiger.cache

// Key metrics to check:
// - "bytes currently in the cache"
// - "maximum bytes configured"
// - "pages evicted by application threads"

// Memory overview
db.serverStatus().mem  // Shows: resident, virtual, mapped

// Connection count
db.serverStatus().connections
```
Check system memory:
```bash
# Free memory
free -h

# MongoDB memory usage
ps aux | grep mongod
# Check RSS (resident set size) vs VSZ (virtual size)

# Detailed memory stats
cat /proc/$(pgrep mongod)/status | grep -i mem
```
Step 2: Configure WiredTiger Cache
Adjust cache size in mongod.conf:
```bash
sudo nano /etc/mongod.conf
```

```yaml
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 4  # Set to ~40-50% of available RAM
```

Formula for cache size:
- Single-purpose MongoDB server: 50-60% of RAM
- Shared server with other services: 25-40% of RAM
- Always leave room for the OS page cache and connections
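A mongod.conf change requires a restart to take effect. If one is not immediately possible, the cache can also be resized at runtime via setParameter; note the runtime change does not survive a restart, so the config file should still be updated:

```javascript
// Resize the WiredTiger cache on a running mongod (non-persistent)
db.adminCommand({
  setParameter: 1,
  wiredTigerEngineRuntimeConfig: "cache_size=4G"
})
```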
Restart after configuration change:
```bash
sudo systemctl restart mongod
```

Step 3: Fix Memory-Intensive Queries
Identify slow queries:
```javascript
// Enable profiler for slow operations
db.setProfilingLevel(1, { slowms: 100 })

// Check profiler data
db.system.profile.find({
  millis: { $gt: 1000 },
  ts: { $gt: new Date(Date.now() - 3600000) }
}).sort({ ts: -1 }).limit(10)

// Look for:
// - "planSummary": "COLLSCAN" (collection scans)
// - large "docsExamined" values
// - "hasSortStage": true with large result sets
```
Find queries with large sort stages:
```javascript
db.system.profile.aggregate([
  { $match: { hasSortStage: true } },
  { $project: {
      command: 1,
      millis: 1,
      docsExamined: 1,
      keysExamined: 1
  }},
  { $sort: { millis: -1 } },
  { $limit: 10 }
])
```

Add indexes for sort operations:
```javascript
// If sorting by { createdAt: -1 }
db.orders.createIndex({ createdAt: -1 })

// Compound index for query + sort
db.orders.createIndex({ status: 1, createdAt: -1 })
```
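To confirm a sort is actually served by the new index, explain() should show an index scan and no in-memory sort stage (collection and field names here follow the examples above):

```javascript
// The winning plan should contain IXSCAN and no SORT stage
db.orders.find({ status: "completed" })
  .sort({ createdAt: -1 })
  .explain("executionStats")
```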
Step 4: Fix Aggregation Memory Issues
For large aggregations, use allowDiskUse:
```javascript
db.largeCollection.aggregate([
  { $group: { _id: "$category", total: { $sum: "$amount" } } },
  { $sort: { total: -1 } }
], { allowDiskUse: true })
```

Optimize the aggregation pipeline:
```javascript
// Bad: $match after $group (processes all documents)
db.orders.aggregate([
  { $group: { _id: "$status", count: { $sum: 1 } } },
  { $match: { _id: "completed" } }
])

// Good: $match first (reduces documents early)
db.orders.aggregate([
  { $match: { status: "completed" } },
  { $group: { _id: "$status", count: { $sum: 1 } } }
])
```
Step 5: Reduce Connection Memory
Check and limit connections:
```javascript
// Current connections
db.serverStatus().connections
// "current" is open connections; "available" is remaining capacity
// under the configured limit

// See the configured limit (if set) in the parsed startup options
db.adminCommand({ getCmdLineOpts: 1 })
```

Set a connection limit in mongod.conf:

```yaml
net:
  maxIncomingConnections: 1000
```
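The ~1 MB-per-connection figure above makes total connection memory easy to estimate before choosing pool sizes; a quick sketch (instance count and pool size are illustrative assumptions):

```javascript
// Sketch: estimate server-side memory consumed by connection stacks,
// assuming ~1 MB of stack per open connection
function connectionMemoryMB(appInstances, poolSizePerInstance, mbPerConnection = 1) {
  return appInstances * poolSizePerInstance * mbPerConnection;
}

// 20 app instances, each with maxPoolSize 50 -> ~1000 MB just for
// connection stacks, before any query working memory
console.log(connectionMemoryMB(20, 50)); // 1000
```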
Adjust connection pool in application:
```javascript
// Node.js driver example
const client = new MongoClient(uri, {
  maxPoolSize: 50,       // Default is 100
  minPoolSize: 10,
  connectTimeoutMS: 10000
})
```

Step 6: Prevent OOM Kills
Add swap space for safety buffer:
```bash
# Check current swap
swapon --show

# Add a 4GB swap file (if needed)
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make it permanent
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```
Adjust OOM score for MongoDB:
```bash
# Check current OOM score adjustment
cat /proc/$(pgrep mongod)/oom_score_adj

# Reduce likelihood of OOM kill (valid range is -1000 to 1000;
# -500 is a reasonable starting point)
echo -500 | sudo tee /proc/$(pgrep mongod)/oom_score_adj
```
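Writes to /proc are lost when mongod restarts. On systemd hosts the adjustment can be made persistent with a unit override; a sketch (the drop-in path and value are one reasonable choice):

```ini
# /etc/systemd/system/mongod.service.d/oom.conf
# Apply with: sudo systemctl daemon-reload && sudo systemctl restart mongod
[Service]
OOMScoreAdjust=-500
```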
Verification
Verify memory configuration:
```javascript
// Check WiredTiger cache
// "maximum bytes configured" should match your cacheSizeGB setting
db.serverStatus().wiredTiger.cache

// Monitor cache utilization
db.serverStatus().wiredTiger.cache["bytes currently in the cache"] /
  db.serverStatus().wiredTiger.cache["maximum bytes configured"]
// Should stay below 0.95 (95%)
```
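For scripted monitoring, that ratio check can be wrapped in a small helper; a sketch using the 80%/95% alert thresholds recommended under Best Practices (the function name and sample numbers are illustrative; the field names match serverStatus output):

```javascript
// Sketch: classify WiredTiger cache utilization from serverStatus-style stats
function cacheUtilization(cache) {
  const used = cache["bytes currently in the cache"];
  const max = cache["maximum bytes configured"];
  const ratio = used / max;
  if (ratio >= 0.95) return { ratio, level: "critical" };
  if (ratio >= 0.80) return { ratio, level: "warning" };
  return { ratio, level: "ok" };
}

// Example with illustrative numbers: a 4 GB cache with 3.9 GB in use
const sample = {
  "bytes currently in the cache": 3.9 * 1024 ** 3,
  "maximum bytes configured": 4 * 1024 ** 3,
};
console.log(cacheUtilization(sample).level); // "critical"
```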
System-level verification:
```bash # Check MongoDB memory usage ps aux | grep mongod
# Monitor memory over time watch -n 5 free -h
# Check OOM events dmesg | grep -i "out of memory" | tail -10 ```
Common Pitfalls
- Cache size too close to total RAM - Leaves no room for connections and OS
- Ignoring the OS page cache - the WiredTiger cache is separate from the filesystem cache, and both need RAM
- Not monitoring cache eviction - High eviction rates indicate memory pressure
- Setting cacheSizeGB to 0 - Disables cache limit entirely, dangerous
Best Practices
- Monitor cache utilization with alerts at 80% and 95%
- Use allowDiskUse: true for large aggregations
- Create covering indexes to avoid loading full documents
- Set appropriate connection pool sizes
- Keep MongoDB on dedicated servers when possible
- Document memory settings for capacity planning
Related Issues
- MongoDB WiredTiger Cache Full
- MongoDB Slow Query Analysis
- MongoDB Disk Full
- MongoDB Connection Pool Exhausted