Introduction
When the volume holding its data files fills up, MongoDB can no longer accept writes: the storage engine returns "No space left on device" errors, and managed deployments such as MongoDB Atlas proactively block writes once disk usage crosses a watermark (around 90%). Either way, the effect is a write outage in production, and it requires immediate remediation to restore write functionality.
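Because the failure mode is abrupt, it pays to catch the watermark before mongod does. A minimal cron-able check (the data path and 85% threshold are assumptions; adjust them for your deployment):

```shell
#!/bin/sh
# Report WARN/OK for a given disk-usage percentage against a threshold.
check_usage() {
  usage="$1"; threshold="$2"
  if [ "$usage" -ge "$threshold" ]; then
    echo "WARN: data volume at ${usage}% (threshold ${threshold}%)"
  else
    echo "OK: data volume at ${usage}%"
  fi
}

# Data path is an assumption; fall back to / if it does not exist.
DATA_PATH="${DATA_PATH:-/var/lib/mongodb}"
[ -d "$DATA_PATH" ] || DATA_PATH=/

# df -P prints the usage percentage in column 5, e.g. "87%"
current=$(df -P "$DATA_PATH" | awk 'NR==2 { sub("%", "", $5); print $5 }')
check_usage "$current" 85
```

Wired into cron or a monitoring agent, this gives you an alert well before writes start failing.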
Symptoms
When disk space reaches critical levels, MongoDB exhibits these behaviors:
```text
# Write rejection errors
MongoError: Out of disk space
MongoServerError: Can't take a write operation while balancing
WriteError: No space left on device

# In mongod.log
{"t":{"$date":"2026-04-03T10:00:00Z"},"s":"E","c":"STORAGE","id":22423,"msg":"WiredTiger error","attr":{"error":"No space left on device"}}

# Replica set members may become stale
{"msg":"Replication coordinator shutting down","attr":{"reason":"disk full"}}

# Ops halted
db.collection.insertOne({...})
// MongoWriteConcernError: waiting for replication timed out
```
System-level indicators:

```bash
df -h /var/lib/mongodb
# Filesystem shows 90%+ usage
```

Common Causes
1. Data volume growth - Natural growth exceeding disk capacity planning
2. Large oplog - Replica set oplog consuming excessive space
3. Abandoned indexes - Unused or duplicate indexes taking space
4. Uncapped collections - Collections without size limits growing unbounded
5. Journal files - Journal not being cleaned up properly
6. Old backup files - Leftover backups in the data directory
7. GridFS chunks - Large file storage in GridFS
Step-by-Step Fix
Step 1: Assess Disk Usage
Check system and MongoDB disk usage:
```bash
# System disk usage
df -h

# MongoDB data directory
du -sh /var/lib/mongodb/*

# Find large files
find /var/lib/mongodb -type f -size +100M -exec ls -lh {} \;
```
Within MongoDB, check database and collection sizes:
```javascript
mongosh

// Database sizes
db.adminCommand("listDatabases").databases.forEach(function(d) {
  print(d.name + ": " + (d.sizeOnDisk / 1024 / 1024 / 1024).toFixed(2) + " GB")
})

// Collection sizes (logical vs. on-disk)
db.getCollectionNames().forEach(function(c) {
  let stats = db.getCollection(c).stats()
  print(c + ": " + (stats.size / 1024 / 1024).toFixed(2) + " MB (" +
        (stats.storageSize / 1024 / 1024).toFixed(2) + " MB storage)")
})

// Index sizes (per index, from collection stats)
db.getCollectionNames().forEach(function(c) {
  let indexSizes = db.getCollection(c).stats().indexSizes
  Object.keys(indexSizes).forEach(function(name) {
    print(c + "." + name + ": " + (indexSizes[name] / 1024 / 1024).toFixed(2) + " MB")
  })
})
```
Step 2: Check Oplog Size
On replica sets, oplog can be a major space consumer:
```javascript
// Check oplog size
use local
db.oplog.rs.stats()

// Oplog window (time span covered)
db.oplog.rs.find().sort({ ts: 1 }).limit(1)  // Oldest entry
db.oplog.rs.find().sort({ ts: -1 }).limit(1) // Newest entry
```
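The two queries above bound the oplog window; a small helper can turn the two `ts` values into hours. This is a sketch that assumes Timestamp-like objects exposing epoch seconds via a `.t` field, as mongosh's `Timestamp` does:

```javascript
// Compute the oplog window in hours from the oldest and newest
// oplog timestamps. Assumes objects with epoch seconds in `.t`.
function oplogWindowHours(oldestTs, newestTs) {
  const seconds = newestTs.t - oldestTs.t;
  return seconds / 3600;
}

// Example with plain objects standing in for Timestamps:
const oldest = { t: 1700000000 };
const newest = { t: 1700086400 }; // 86,400 seconds later
console.log(oplogWindowHours(oldest, newest) + " hours"); // 24 hours
```

If the window is far longer than your backup/recovery needs, the oplog is a shrink candidate; if it is only a few hours, shrinking it further risks secondaries falling off the oplog.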
Reduce oplog size if unnecessarily large:
```bash
# Stop the secondary
sudo systemctl stop mongod

# Start in standalone mode (without the replica set option)
mongod --port 27017 --dbpath /var/lib/mongodb --bind_ip localhost

# In a new terminal
mongosh --port 27017
```

Then, in the mongosh session:

```javascript
// Drop and recreate the oplog with a smaller size
use local
db.oplog.rs.drop()
db.createCollection("oplog.rs", { capped: true, size: 1024 * 1024 * 1024 }) // 1 GB
```

Finally, restart mongod with its normal replica set configuration.
Or use the supported method, replSetResizeOplog (WiredTiger, MongoDB 3.6+):

```javascript
// Resize the oplog while the server is running
db.adminCommand({ replSetResizeOplog: 1, size: 1024 }) // size in MB
```

Step 3: Remove Unused Indexes
Identify and remove unused indexes:
```javascript
// Find unused indexes via $indexStats (counters reset on restart,
// so make sure the server has been up long enough to be representative)
db.getCollectionNames().forEach(function(c) {
  let stats = db.getCollection(c).aggregate([{ $indexStats: {} }]).toArray()
  stats.forEach(function(s) {
    if (s.accesses.ops === 0 && s.name !== "_id_") {
      print("Unused index: " + c + "." + s.name)
    }
  })
})

// Drop an unused index
db.collection.dropIndex("indexName")

// List index key patterns to spot duplicates (same keys, different names)
db.collection.getIndexes().forEach(function(idx) { printjson(idx.key) })
```
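Eyeballing printed key patterns is error-prone. A sketch of a helper that flags indexes sharing the same key pattern, given a `getIndexes()`-style array (the sample input is illustrative):

```javascript
// Given an array of index documents (shaped like getIndexes() output),
// return indexes whose key pattern duplicates an earlier index.
// Key order matters for compound indexes; JSON.stringify preserves
// the insertion order of string keys, so it works as a pattern key.
function findDuplicateIndexes(indexes) {
  const seen = new Map();
  const duplicates = [];
  for (const idx of indexes) {
    const pattern = JSON.stringify(idx.key);
    if (seen.has(pattern)) {
      duplicates.push({ name: idx.name, duplicateOf: seen.get(pattern) });
    } else {
      seen.set(pattern, idx.name);
    }
  }
  return duplicates;
}

// Illustrative sample resembling getIndexes() output:
const sample = [
  { name: "_id_", key: { _id: 1 } },
  { name: "user_idx", key: { userId: 1 } },
  { name: "user_idx_copy", key: { userId: 1 } },
];
console.log(findDuplicateIndexes(sample));
// [ { name: "user_idx_copy", duplicateOf: "user_idx" } ]
```

In mongosh you would feed it `db.collection.getIndexes()` directly; note it only catches exact duplicates, not prefix-redundant indexes like `{a: 1}` alongside `{a: 1, b: 1}`.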
Step 4: Clean Up Data
Remove old or unnecessary data:
```javascript
// Delete documents older than the retention period (90 days here)
db.logs.deleteMany({
  createdAt: { $lt: new Date(Date.now() - 90 * 24 * 60 * 60 * 1000) }
})

// Drop unused collections
db.unused_collection.drop()

// Compact a collection to return free space to the OS
// WARNING: compact is I/O-intensive (and blocks the database
// before MongoDB 4.4); run it off-peak
db.runCommand({ compact: "largeCollection" })

// The repairDatabase command was removed in MongoDB 4.2. For major
// space reclamation on modern versions, run `mongod --repair` offline
// (requires free space roughly equal to the data set), or re-sync a
// replica set member from scratch.
```
Step 5: Manage Capped Collections
Convert growing collections to capped:
```javascript
// Create a new capped collection
db.createCollection("logs", {
  capped: true,
  size: 1073741824, // 1 GB in bytes
  max: 100000       // At most 100,000 documents
})

// Converting an existing collection requires a migration:
// 1. Create a new capped collection
// 2. Copy the data across
// 3. Drop the old collection and rename the new one
```
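The migration above can be sketched in a mongosh session. Collection names here are illustrative, and on a busy collection you would also need to handle writes arriving during the copy:

```javascript
// 1. Create the new capped collection
db.createCollection("logs_capped", { capped: true, size: 1073741824 })

// 2. Copy data across, oldest first, so the cap evicts the right documents
db.logs.find().sort({ _id: 1 }).forEach(function(doc) {
  db.logs_capped.insertOne(doc)
})

// 3. Drop the old collection and rename the new one
db.logs.drop()
db.logs_capped.renameCollection("logs")
```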
Step 6: Free Disk Space
System-level cleanup:
```bash
# Remove old rotated log files
sudo find /var/log/mongodb -name "*.log.*" -mtime +30 -delete

# Compress or move old logs
sudo gzip /var/log/mongodb/mongod.log.old

# Clean the package cache (if applicable)
sudo apt-get clean   # Debian/Ubuntu
sudo yum clean all   # RHEL/CentOS

# Find and remove core dumps
sudo find /var/lib/mongodb -name "core.*" -delete

# Check for orphaned WiredTiger files
ls -la /var/lib/mongodb/WiredTiger*
```
Step 7: Expand Storage (If Needed)
If cleanup is insufficient:
```bash
# Resize an LVM volume (if using LVM)
sudo lvextend -L +50G /dev/mapper/vg-mongodb
sudo resize2fs /dev/mapper/vg-mongodb

# Or move the data directory to a larger volume.
# Update mongod.conf:
# storage:
#   dbPath: /mnt/new-storage/mongodb
```
Verification
After remediation, verify operations restored:
```javascript
// Test a write
db.test.insertOne({ timestamp: new Date() })

// Check filesystem usage as reported by dbStats (MongoDB 3.6+)
let s = db.stats()
print("Disk used: " + (s.fsUsedSize / s.fsTotalSize * 100).toFixed(1) + "%")

// Confirm the configured storage settings (dbPath)
db.adminCommand({ getCmdLineOpts: 1 }).parsed.storage

// Monitor data growth
db.collection.stats({ scale: 1024 * 1024 }) // Sizes in MB
```
System verification:
```bash
# Check available space
df -h /var/lib/mongodb

# Verify MongoDB accepts writes
mongosh --eval "db.test.insertOne({x: 1}); print('Write successful')"

# Watch the logs for disk-related messages
sudo tail -f /var/log/mongodb/mongod.log | grep -i "disk"
```
Common Pitfalls
- Running compact during peak hours - compact is I/O-intensive and, before MongoDB 4.4, blocks operations on the database
- Deleting data without compacting - Space may not be reclaimed immediately
- Oplog on same volume - Oplog growth can surprise capacity planning
- Ignoring disk warnings - 80% threshold warnings should trigger action
- Not monitoring growth rate - Sudden growth spikes need immediate attention
Best Practices
- Set up disk monitoring alerts at 70% and 85% thresholds
- Configure TTL indexes for time-series data
- Use capped collections for logs and metrics
- Schedule regular index reviews to remove unused indexes
- Implement data archival strategy for old records
- Separate journal on different disk for performance and safety
- Plan for 20-30% free space buffer above peak usage
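For the TTL-index recommendation above, a one-line sketch (the collection, field name, and 90-day retention period are illustrative):

```javascript
// Expire documents 90 days (7,776,000 seconds) after their createdAt value
db.logs.createIndex({ createdAt: 1 }, { expireAfterSeconds: 7776000 })
```

A background thread then deletes expired documents automatically, which keeps retention enforcement out of application code.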
Related Issues
- MongoDB WiredTiger Cache Full
- MongoDB Memory Limit Exceeded
- MongoDB Slow Query Analysis
- MongoDB Backup Restore Error