## What's Actually Happening

MongoDB queries run slowly when they scan entire collections without an index, return oversized result sets, or use query patterns the planner cannot optimize. The result is degraded application response times.
## The Error You'll See

Slow query log (`mongod.log`):

```json
{
  "t": {"$date": "2026-04-16T00:27:00Z"},
  "s": "I",
  "c": "COMMAND",
  "msg": "Slow query",
  "attr": {
    "type": "command",
    "ns": "mydb.orders",
    "command": {"find": "orders", "filter": {"customer_id": 100}},
    "durationMillis": 15000
  }
}
```

Profiler output:
```javascript
db.system.profile.find({millis: {$gt: 1000}}).sort({ts: -1}).limit(5)

{
  "op": "query",
  "ns": "mydb.orders",
  "query": {"customer_id": 100},
  "millis": 15000,
  "docsExamined": 1000000,
  "nreturned": 5
}
```
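The quickest way to read a profiler entry is the ratio of `docsExamined` to `nreturned`: a well-indexed query examines roughly as many documents as it returns. As a rough illustration, here is a hypothetical one-line helper in plain JavaScript (runnable outside mongosh, not part of MongoDB):

```javascript
// scanRatio: documents examined per document returned; ~1 is ideal.
// Input shape matches a system.profile entry (hypothetical helper).
function scanRatio(entry) {
  return entry.nreturned > 0 ? entry.docsExamined / entry.nreturned : Infinity;
}

const entry = {op: "query", ns: "mydb.orders", millis: 15000,
               docsExamined: 1000000, nreturned: 5};
console.log(scanRatio(entry));  // 200000 docs scanned per result: badly under-indexed
```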
## Why This Happens

1. **Missing index** - collection scan instead of index scan
2. **Uncovered query** - fetching full documents instead of answering from the index alone
3. **Large documents** - transferring too much data per result
4. **Inefficient regex** - patterns not anchored to the start of the string
5. **Array queries** - large arrays without a multikey index
6. **Wrong shard key** - queries that scatter-gather across shards
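Most of these causes leave fingerprints in `explain("executionStats")` output. As a rough sketch, here is a hypothetical helper (plain JavaScript, runnable outside mongosh, not part of MongoDB) that flags the two most common symptoms in an explain document:

```javascript
// diagnoseExplain: hypothetical helper that inspects the output of
// db.collection.find(...).explain("executionStats") and flags common problems.
function diagnoseExplain(explainDoc) {
  const problems = [];
  const stats = explainDoc.executionStats || {};
  const plan = (explainDoc.queryPlanner || {}).winningPlan || {};

  // Walk the plan tree collecting stage names (COLLSCAN, IXSCAN, FETCH, ...).
  const stages = [];
  (function walk(stage) {
    if (!stage) return;
    stages.push(stage.stage);
    if (stage.inputStage) walk(stage.inputStage);
    (stage.inputStages || []).forEach(walk);
  })(plan);

  if (stages.includes("COLLSCAN")) {
    problems.push("COLLSCAN: no index used - create an index for this filter");
  }
  // Examining far more documents than returned suggests a weak index.
  if (stats.nReturned > 0 && stats.totalDocsExamined / stats.nReturned > 10) {
    problems.push("over-scanning: totalDocsExamined >> nReturned");
  }
  return problems;
}

// Example explain fragment, matching the slow query above.
const explainDoc = {
  queryPlanner: {winningPlan: {stage: "COLLSCAN", filter: {customer_id: {$eq: 100}}}},
  executionStats: {totalDocsExamined: 1000000, nReturned: 5, executionTimeMillis: 15000},
};
console.log(diagnoseExplain(explainDoc));
```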
## Step 1: Enable Profiling

```javascript
// Enable the profiler for slow queries (> 100ms)
db.setProfilingLevel(1, 100)

// Profile all operations (careful - impacts performance)
db.setProfilingLevel(2)

// Check the current profiling level
db.getProfilingStatus()

// View recent slow queries
db.system.profile.find().sort({ts: -1}).limit(10)

// Find the slowest namespaces
db.system.profile.aggregate([
  {$match: {millis: {$gt: 1000}}},
  {$group: {
    _id: "$ns",
    count: {$sum: 1},
    avgTime: {$avg: "$millis"},
    maxTime: {$max: "$millis"}
  }},
  {$sort: {avgTime: -1}}
])

// Clear the profile collection
db.system.profile.drop()
```
## Step 2: Use Explain to Analyze the Query

```javascript
// Get the query execution plan
db.orders.find({customer_id: 100}).explain("executionStats")

// Key fields to check:
// - winningPlan.stage: COLLSCAN (slow) vs IXSCAN (fast)
// - totalDocsExamined: should be close to nReturned
// - executionTimeMillis: total query time
// - indexName (inside the IXSCAN stage): which index was used

// The three verbosity modes:
db.orders.find({customer_id: 100}).explain("queryPlanner")      // Winning plan only, query not run
db.orders.find({customer_id: 100}).explain("executionStats")    // Winning plan plus execution stats
db.orders.find({customer_id: 100}).explain("allPlansExecution") // Stats for all candidate plans

// Example output:
{
  "queryPlanner": {
    "winningPlan": {
      "stage": "COLLSCAN",  // Bad - full collection scan
      "filter": {"customer_id": {"$eq": 100}}
    }
  },
  "executionStats": {
    "totalDocsExamined": 1000000,  // Examined 1M docs
    "nReturned": 5,                // Returned 5 docs
    "executionTimeMillis": 15000   // 15 seconds
  }
}
```
## Step 3: Create Indexes

```javascript
// Single-field index
db.orders.createIndex({customer_id: 1})

// Compound index for multiple fields
db.orders.createIndex({customer_id: 1, status: 1})

// Field order matters: equality first, then sort, then range (the ESR rule)
// For query: {customer_id: 100, status: "pending", total: {$gt: 100}}
db.orders.createIndex({customer_id: 1, status: 1, total: 1})

// Text index for text search
db.products.createIndex({description: "text"})

// Geospatial index
db.stores.createIndex({location: "2dsphere"})

// TTL index for auto-expiration
db.sessions.createIndex({createdAt: 1}, {expireAfterSeconds: 3600})

// Partial index for a common filter
db.orders.createIndex(
  {customer_id: 1},
  {partialFilterExpression: {status: "active"}}
)

// List indexes
db.orders.getIndexes()

// Drop an unused index
db.orders.dropIndex("index_name")
```
## Step 4: Optimize Index Usage

```javascript
// Covered query - answered from the index alone, no document fetch.
// Create an index containing all queried and projected fields
db.orders.createIndex({customer_id: 1, status: 1, total: 1})

// Query and project only indexed fields (and exclude _id)
db.orders.find(
  {customer_id: 100, status: "pending"},
  {_id: 0, customer_id: 1, status: 1, total: 1}  // Projection
)
// explain() shows stage: "PROJECTION_COVERED"

// Index intersection: with separate indexes on customer_id and product_id,
// MongoDB can sometimes intersect them (a single compound index is usually faster)
db.orders.find({customer_id: 100, product_id: 200})

// Avoid patterns that use indexes poorly or not at all:

// 1. $ne, $not operators - must scan most of the index
db.orders.find({status: {$ne: "cancelled"}})

// 2. $or over unindexed fields - each branch needs its own index
db.orders.find({$or: [{a: 1}, {b: 2}]})
// Better: create indexes on both fields, or a compound index

// 3. Regex not anchored at the start of the string
db.users.find({email: /@example.com$/})  // Cannot use index bounds
db.users.find({email: /^john/})          // Can use the index

// 4. $nin (not in) - like $ne, scans most of the index
db.orders.find({status: {$nin: ["cancelled", "pending"]}})
```
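The regex rule has a simple explanation: an anchored prefix such as `/^john/` maps to a contiguous range of sorted index keys, so the index can seek straight to it; a pattern with no starting anchor maps to no such range and must examine every key. A minimal sketch of the prefix-to-range idea (hypothetical helper, simplified to single-character increments):

```javascript
// prefixRange: the range of sorted string keys matching a given prefix.
// An index scan can seek to min and stop at max; keys k satisfy min <= k < max.
// Hypothetical illustration - simplified, ignores code points at the max value.
function prefixRange(prefix) {
  const last = prefix.charCodeAt(prefix.length - 1);
  const upper = prefix.slice(0, -1) + String.fromCharCode(last + 1);
  return {min: prefix, max: upper};
}

console.log(prefixRange("john"));  // { min: 'john', max: 'joho' }
```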
## Step 5: Optimize Document Structure

```javascript
// WRONG: large embedded arrays that grow without bound
{
  _id: 1,
  name: "Order",
  items: [ /* 1000+ items embedded */ ]
}

// BETTER: separate collection for large arrays
// orders collection:
{ _id: 1, name: "Order", itemCount: 1000 }

// order_items collection:
{ order_id: 1, item: "Product A", price: 100 }

// Index the reference field
db.order_items.createIndex({order_id: 1})

// WRONG: very large documents
{ _id: 1, data: "....." /* 10MB string */ }

// BETTER: store large data in GridFS,
// or in a separate collection with references

// Use projection to limit returned fields
db.orders.find(
  {customer_id: 100},
  {customer_id: 1, status: 1, total: 1}  // Only return these
)

// Limit result size
db.orders.find({customer_id: 100}).limit(100)
```
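Document growth is easy to miss until it hurts: MongoDB enforces a 16 MB BSON document limit, and documents anywhere near it are expensive to transfer even when valid. As a rough check, JSON-serialized length approximates BSON size to within a small factor (hypothetical helper, plain JavaScript):

```javascript
const BSON_MAX_BYTES = 16 * 1024 * 1024;  // MongoDB's 16 MB document limit

// approxDocBytes: rough document size via JSON serialization.
// Hypothetical estimate - BSON encoding differs slightly, but the
// order of magnitude is what matters for spotting runaway growth.
function approxDocBytes(doc) {
  return JSON.stringify(doc).length;
}

const order = {_id: 1, name: "Order", items: new Array(1000).fill({sku: "A", qty: 1})};
const bytes = approxDocBytes(order);
console.log(bytes, bytes < BSON_MAX_BYTES);
```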
## Step 6: Use Aggregation Efficiently

```javascript
// SLOW: $lookup whose foreignField has no index
db.orders.aggregate([
  {$lookup: {
    from: "customers",
    localField: "customer_id",
    foreignField: "_id",
    as: "customer"
  }}
])

// BETTER: index the foreignField on the "from" collection.
// _id is indexed automatically, so the $lookup above already uses an index;
// a $lookup on any other foreignField needs its own index, e.g.:
db.customers.createIndex({email: 1})  // for a hypothetical foreignField: "email"

// Use $match early to reduce documents entering later stages
db.orders.aggregate([
  {$match: {status: "pending"}},  // Filter first
  {$group: {_id: "$customer_id", total: {$sum: "$amount"}}}
])

// Use $project to drop unneeded fields
db.orders.aggregate([
  {$match: {customer_id: 100}},
  {$project: {customer_id: 1, total: 1}},  // Only needed fields
  {$group: {_id: null, sum: {$sum: "$total"}}}
])

// Allow disk use for large aggregations
db.orders.aggregate([...], {allowDiskUse: true})

// $match before $unwind so fewer documents are unwound
db.orders.aggregate([
  {$match: {customer_id: 100}},  // Filter before unwind
  {$unwind: "$items"}
])
```
## Step 7: Monitor Index Usage

```javascript
// Check index usage statistics
db.orders.aggregate([{$indexStats: {}}])

// Output per index:
{
  "name": "customer_id_1",
  "accesses": {
    "ops": 1000,             // Number of operations that used this index
    "since": ISODate("...")
  }
}

// Find rarely used indexes (low ops)
db.orders.aggregate([
  {$indexStats: {}},
  {$match: {"accesses.ops": {$lt: 10}}}
])

// Drop unused indexes
db.orders.dropIndex("unused_index")

// Check collection stats
db.orders.stats()

// Check index sizes
db.orders.stats().indexSizes
```
## Step 8: Configure Read Concern and Write Concern

```javascript
// For read-heavy operations, choose an appropriate read concern
db.orders.find({customer_id: 100}).readConcern("local")     // Default
db.orders.find({customer_id: 100}).readConcern("available") // Fastest, may be stale
db.orders.find({customer_id: 100}).readConcern("majority")  // Most consistent

// Use a read preference to offload reads in a replica set
db.orders.find({customer_id: 100}).readPref("secondary")

// Batch writes
db.orders.insertMany([...])  // Faster than repeated insertOne

// Bulk write operations
db.orders.bulkWrite([
  {insertOne: {document: {...}}},
  {updateOne: {filter: {...}, update: {...}}},
  {deleteOne: {filter: {...}}}
], {ordered: false})  // Unordered allows faster parallel execution
```
## Step 9: Shard Large Collections

```javascript
// For collections approaching 1TB or beyond, consider sharding

// Enable sharding on the database
sh.enableSharding("mydb")

// Shard the collection
sh.shardCollection("mydb.orders", {customer_id: 1})

// Choose the shard key carefully:
// - High cardinality (many unique values)
// - Even distribution of reads and writes
// - Matches common query patterns

// Check shard distribution
db.orders.getShardDistribution()

// Monitor chunk distribution
sh.status()

// A query that includes the shard key is routed to one shard
// (explain shows a SINGLE_SHARD winning plan); queries without it
// scatter-gather across all shards - a sign of the wrong shard key
db.orders.find({customer_id: 100}).explain()
```
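The shard-key criteria above can be sanity-checked on a sample of documents before committing to a key. A hypothetical sketch in plain JavaScript: the fraction of distinct values in a sample approximates cardinality (close to 1.0 means nearly every value is unique, a good sign; close to 0 means few distinct values, a poor shard key):

```javascript
// keyCardinality: fraction of distinct values for a candidate shard key
// in a sample of documents. Hypothetical helper, not part of MongoDB.
function keyCardinality(docs, field) {
  const distinct = new Set(docs.map(d => String(d[field])));
  return distinct.size / docs.length;
}

const sample = [
  {customer_id: 100, status: "pending"},
  {customer_id: 101, status: "pending"},
  {customer_id: 102, status: "shipped"},
  {customer_id: 100, status: "pending"},
];
console.log(keyCardinality(sample, "customer_id"));  // 0.75 - promising
console.log(keyCardinality(sample, "status"));       // 0.5  - low cardinality
```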
## Step 10: Set Up Monitoring

```javascript
// List operations running longer than 5 seconds
db.currentOp({"secs_running": {$gt: 5}})

// Kill a long-running operation by its opid (from currentOp output)
db.killOp(12345)

// Database-level stats
db.stats()

// Command counters from serverStatus
db.serverStatus().metrics.commands

// Monitoring script
// /usr/local/bin/check_mongo_slow.js
db.system.profile.find({
  ts: {$gt: new Date(Date.now() - 3600000)},  // Last hour
  millis: {$gt: 1000}
}).forEach(printjson)

// Run it:
// mongosh --quiet mydb /usr/local/bin/check_mongo_slow.js
```
## MongoDB Query Optimization Reference
| Issue | Symptom | Solution |
|---|---|---|
| COLLSCAN | No index used | Create index |
| docsExamined >> nreturned | Over-scanning | Create better index |
| Slow $lookup | No join index | Index join fields |
| Large documents | High transfer | Use projection |
| Unbounded array | Array growth | Separate collection |
## Verify the Fix

```javascript
// After creating the index or optimizing the query:

// 1. Check the execution plan
db.orders.find({customer_id: 100}).explain("executionStats")
// Should show:
// - stage: IXSCAN (not COLLSCAN)
// - totalDocsExamined ≈ nReturned
// - executionTimeMillis < 100

// 2. Confirm the new index is being used
db.orders.aggregate([{$indexStats: {}}])

// 3. Test query performance interactively
db.orders.find({customer_id: 100}).pretty()

// 4. Watch the slow query log
// No new entries for this query

// 5. Check application response times
// Queries should return quickly

// 6. Verify the index size is acceptable
db.orders.stats().indexSizes
```
## Related Issues
- [Fix MongoDB Connection Timeout](/articles/fix-mongodb-connection-timeout)
- [Fix MongoDB Index Build Slow](/articles/fix-mongodb-index-build-slow)
- [Fix MongoDB High Memory Usage](/articles/fix-mongodb-high-memory-usage)