Introduction
Slow queries are the most common performance issue in MongoDB deployments. They occur when queries must scan large portions of data instead of using efficient indexes, when aggregation pipelines process too much data, or when schema design doesn't match query patterns. Identifying and resolving slow queries requires systematic analysis of query execution plans.
Symptoms
Slow queries manifest with measurable symptoms:
```text
# Application timeouts
Query exceeded timeout limit
MongoTimeoutError: operation timed out

# High resource usage
CPU spikes during query execution
Disk I/O saturation

# In profiler
millis: 5000+ (5 seconds)
docsExamined: 1000000 (high scan)
planSummary: COLLSCAN (no index)

# In logs
{"msg":"Slow query","attr":{"durationMillis":5000,"ns":"mydb.users","command":{"find":{...}}}}
```
Users experience delayed page loads, report generation timeouts, or complete application stalls during peak query times.
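Since MongoDB 4.4 the server emits structured JSON log lines like the sample above, slow-query entries can be picked out programmatically. A minimal sketch in plain JavaScript (the sample line and the `parseSlowQueryLog` helper name are illustrative; field names match the log format shown above):

```javascript
// Sketch: pull duration and namespace out of a structured MongoDB log line.
// Field names (msg, attr.durationMillis, attr.ns, attr.command) match the
// JSON log format shown in the Symptoms section.
function parseSlowQueryLog(line) {
  const entry = JSON.parse(line);
  if (entry.msg !== "Slow query") return null;
  return {
    ns: entry.attr.ns,
    durationMillis: entry.attr.durationMillis,
    // First key of the command document names the operation, e.g. "find"
    command: Object.keys(entry.attr.command)[0],
  };
}

// Illustrative sample line (not taken from a real deployment)
const sample = '{"msg":"Slow query","attr":{"durationMillis":5000,"ns":"mydb.users","command":{"find":"users"}}}';
console.log(parseSlowQueryLog(sample));
// { ns: 'mydb.users', durationMillis: 5000, command: 'find' }
```

A helper like this is useful for feeding log lines into alerting before the profiler is even enabled.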
Common Causes
1. Missing indexes - Queries scan entire collections
2. Partial index usage - Index used but not optimally
3. Large collection scans - COLLSCAN instead of IXSCAN
4. Inefficient aggregation pipelines - Wrong stage order or missing indexes
5. Unbounded regex queries - Leading wildcard regex patterns
6. Large result sets - Returning too many documents
7. Memory-consuming sort - In-memory sort exceeding the 100MB limit
8. Improper field types - Comparing different BSON types
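Several of these causes can be spotted mechanically from a profiler entry. A minimal triage sketch (field names `planSummary`, `docsExamined`, `nreturned`, and `hasSortStage` follow the `system.profile` document shape; the ratio and result-size thresholds are illustrative assumptions, not MongoDB defaults):

```javascript
// Sketch: heuristic triage of a system.profile entry against the causes above.
function triage(entry) {
  const flags = [];
  if (entry.planSummary === "COLLSCAN") flags.push("missing index (COLLSCAN)");
  // Many more documents examined than returned suggests a poorly matching index
  if (entry.nreturned > 0 && entry.docsExamined / entry.nreturned > 10)
    flags.push("inefficient index (high examined/returned ratio)");
  if (entry.hasSortStage) flags.push("in-memory sort");
  if (entry.nreturned > 10000) flags.push("large result set");
  return flags;
}

console.log(triage({
  planSummary: "COLLSCAN",
  docsExamined: 1000000,
  nreturned: 50,
  hasSortStage: true,
}));
// [ 'missing index (COLLSCAN)',
//   'inefficient index (high examined/returned ratio)',
//   'in-memory sort' ]
```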
Step-by-Step Fix
Step 1: Enable and Use Query Profiler
Configure profiler to capture slow queries:
```javascript
// mongosh

// Enable profiler for slow queries (> 100ms)
db.setProfilingLevel(1, { slowms: 100 })

// Or profile all operations (caution: performance impact)
db.setProfilingLevel(2)

// Check profiling level
db.getProfilingLevel()
db.getProfilingStatus()
```
Analyze profiler data:
```javascript
// Recent slow queries
db.system.profile.find({
  millis: { $gt: 500 },
  ts: { $gt: new Date(Date.now() - 3600000) }
}).sort({ millis: -1 }).limit(10)

// Detailed analysis of a slow query
db.system.profile.findOne({ ts: new Date("...") })
```
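When profile documents have been fetched into the client, they can also be rolled up there. A sketch of a client-side rollup in plain JavaScript (the `summarizeByNamespace` helper is hypothetical; it assumes documents with the standard `ns` and `millis` fields):

```javascript
// Sketch: group fetched system.profile documents by namespace and
// compute avg/max duration and count, worst namespaces first.
function summarizeByNamespace(profileDocs) {
  const byNs = new Map();
  for (const doc of profileDocs) {
    const stats = byNs.get(doc.ns) || { count: 0, totalMillis: 0, maxMillis: 0 };
    stats.count += 1;
    stats.totalMillis += doc.millis;
    stats.maxMillis = Math.max(stats.maxMillis, doc.millis);
    byNs.set(doc.ns, stats);
  }
  return [...byNs.entries()]
    .map(([ns, s]) => ({
      ns,
      avgMillis: s.totalMillis / s.count,
      maxMillis: s.maxMillis,
      count: s.count,
    }))
    .sort((a, b) => b.avgMillis - a.avgMillis);
}
```

Server-side aggregation (shown in Step 8) is preferable for large profile collections; this client-side form is convenient for small samples or ad-hoc scripts.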
Step 2: Analyze Query Execution Plan
Use explain() to understand query execution:
```javascript
// Find a slow query in the profiler
let slowQuery = db.system.profile.findOne({ millis: { $gt: 1000 } })

// Run explain on a similar query
db.collection.explain("executionStats").find({
  status: "active",
  createdAt: { $gt: new Date("2024-01-01") }
})

// Key fields in output:
// - winningPlan: IXSCAN (good) vs COLLSCAN (bad)
// - totalDocsExamined: should be close to nReturned
// - executionTimeMillis: total time
// - indexBounds: how the index was used
```
Explain output interpretation:
```javascript
// Good: index used, minimal scan
{
  winningPlan: { stage: "IXSCAN", ... },
  executionStats: {
    totalDocsExamined: 100,
    nReturned: 100,
    executionTimeMillis: 10
  }
}

// Bad: collection scan, many documents examined
{
  winningPlan: { stage: "COLLSCAN", filter: {...} },
  executionStats: {
    totalDocsExamined: 1000000,
    nReturned: 50,
    executionTimeMillis: 5000
  }
}
```
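One subtlety when reading `winningPlan`: the top-level stage is often `FETCH` or `PROJECTION` with the `IXSCAN` nested inside `inputStage`, so the plan tree should be walked to its leaf before judging it. A sketch of that walk (`leafStage` is a hypothetical helper; the `inputStage`/`inputStages` nesting matches the explain output shape):

```javascript
// Sketch: descend a winningPlan tree to its leaf stage, so that
// a plan like FETCH -> IXSCAN is correctly identified as an index scan.
function leafStage(plan) {
  let stage = plan;
  while (stage.inputStage || (stage.inputStages && stage.inputStages.length)) {
    stage = stage.inputStage || stage.inputStages[0];
  }
  return stage.stage;
}

console.log(leafStage({
  stage: "FETCH",
  inputStage: { stage: "IXSCAN", keyPattern: { status: 1 } },
}));
// IXSCAN
```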
Step 3: Create Missing Indexes
Based on explain output:
```javascript
// For query: { status: "active", createdAt: { $gt: ... } }
// If COLLSCAN, create an index:
db.collection.createIndex({ status: 1, createdAt: -1 })

// For sorted queries, include the sort field
db.collection.createIndex({ status: 1, createdAt: -1, name: 1 })

// Re-run explain to verify
db.collection.explain("executionStats").find({
  status: "active",
  createdAt: { $gt: new Date("2024-01-01") }
})
```
Index design principles:
```javascript
// ESR rule: Equality, Sort, Range
// 1. Equality fields first
// 2. Sort fields second
// 3. Range fields last

// Query: { status: "active", name: { $gt: "A" } }
// Sort: { createdAt: -1 }

// Optimal index:
db.collection.createIndex({
  status: 1,     // Equality
  createdAt: -1, // Sort
  name: 1        // Range
})
```
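The ESR ordering can be made mechanical once the query's fields are classified. A sketch (the `esrIndex` helper and its input shape are hypothetical; classifying fields as equality/sort/range still requires inspecting the query yourself):

```javascript
// Sketch: assemble a compound-index spec following the ESR rule from
// fields already classified by how the query uses them.
function esrIndex({ equality = [], sort = [], range = [] }) {
  const spec = {};
  for (const field of equality) spec[field] = 1;        // Equality first
  for (const [field, dir] of sort) spec[field] = dir;   // Sort second (keeps direction)
  for (const field of range) spec[field] = 1;           // Range last
  return spec;
}

console.log(esrIndex({
  equality: ["status"],
  sort: [["createdAt", -1]],
  range: ["name"],
}));
// { status: 1, createdAt: -1, name: 1 }
```

JavaScript objects preserve string-key insertion order, which is what makes the resulting spec usable directly as a `createIndex` argument.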
Step 4: Fix Inefficient Aggregation Pipelines
Analyze slow aggregations:
```javascript
// Find slow aggregations in the profiler
db.system.profile.find({
  "command.aggregate": { $exists: true },
  millis: { $gt: 1000 }
}).sort({ millis: -1 })

// Get the pipeline
let pipeline = slowQuery.command.pipeline

// Run explain on the aggregation
db.collection.explain("executionStats").aggregate(pipeline)
```
Optimize pipeline stages:
```javascript
// Bad: filter late in the pipeline
db.users.aggregate([
  { $lookup: { from: "orders", ... } },
  { $group: { _id: "$userId", total: { $sum: "$orders.amount" } } },
  { $match: { status: "active" } }
])

// Good: filter early to reduce processed documents
db.users.aggregate([
  { $match: { status: "active" } },  // Early filter
  { $lookup: { from: "orders", ... } },
  { $group: { _id: "$userId", total: { $sum: "$orders.amount" } } }
])

// Use indexes in aggregation
db.users.aggregate([
  { $match: { status: "active" } },  // Uses index
  { $sort: { createdAt: -1 } }       // Uses index when it follows $match
])
```
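A pipeline-linting pass can flag this pattern automatically. A sketch (the `lateMatchStages` helper is hypothetical; the set of "expensive" stages is an illustrative assumption, since the server's optimizer can sometimes push `$match` past other stages on its own):

```javascript
// Sketch: report indexes of $match stages that appear after expensive
// stages ($lookup, $group, $unwind), where filtering late is costly.
function lateMatchStages(pipeline) {
  const expensive = new Set(["$lookup", "$group", "$unwind"]);
  let seenExpensive = false;
  const late = [];
  pipeline.forEach((stage, i) => {
    const op = Object.keys(stage)[0];
    if (expensive.has(op)) seenExpensive = true;
    else if (op === "$match" && seenExpensive) late.push(i);
  });
  return late;
}

console.log(lateMatchStages([
  { $lookup: { from: "orders" } },
  { $group: { _id: "$userId" } },
  { $match: { status: "active" } },
]));
// [ 2 ]
```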
Enable disk use for large aggregations:
```javascript
db.largeCollection.aggregate([
  { $group: { ... } },
  { $sort: { ... } }
], { allowDiskUse: true })
```
Step 5: Fix Regex Query Issues
Analyze regex performance:
```javascript
// Slow: regex with a leading wildcard
db.users.find({ email: /.*example.com$/ })
// This scans all documents - it cannot use an index

// Faster: pattern anchored at the start
db.users.find({ email: /^user.*$/ })
// Can use a prefix index

// Create a supporting index
db.users.createIndex({ email: 1 })

// Equivalent $regex form - an anchored, case-sensitive
// prefix also uses the index
db.users.find({ email: { $regex: "^user" } })
```
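The index-friendliness rules above can be expressed as a small predicate. A sketch (the `regexCanUseIndex` helper is hypothetical; it encodes the documented conditions that only case-sensitive patterns anchored with `^`, and not starting with a wildcard, can be converted to an index range):

```javascript
// Sketch: check whether a regex filter can use an index prefix.
function regexCanUseIndex(pattern, flags = "") {
  if (flags.includes("i")) return false;      // case-insensitive defeats the index
  if (!pattern.startsWith("^")) return false; // must be anchored at the start
  return !pattern.startsWith("^.*");          // ^.* is still a leading wildcard
}

console.log(regexCanUseIndex(".*example.com$")); // false - leading wildcard
console.log(regexCanUseIndex("^user"));          // true  - anchored prefix
console.log(regexCanUseIndex("^user", "i"));     // false - case-insensitive
```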
Step 6: Optimize Sort Operations
Fix sort exceeding memory limit:
```text
Error: Sort operation used more than 33554432 bytes
MongoError: Sort exceeded memory limit
```

The in-memory sort limit is 32 MB (33554432 bytes) in older releases; since MongoDB 4.4 the default is 100 MB.

Create an index for the sort:
```javascript
// Query with sort
db.orders.find({ status: "pending" }).sort({ createdAt: -1 })

// Create a compound index
db.orders.createIndex({ status: 1, createdAt: -1 })

// Verify in explain
db.orders.explain("executionStats")
  .find({ status: "pending" })
  .sort({ createdAt: -1 })
// Should show no SORT stage (the index satisfies the sort)
```
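Checking for an absent SORT stage can be automated against the explain output. A sketch (the `hasBlockingSort` helper is hypothetical; the `inputStage`/`inputStages` nesting matches the explain stage-tree shape):

```javascript
// Sketch: recursively search an explain stage tree for a blocking SORT
// stage; absence means the index order satisfied the sort.
function hasBlockingSort(stage) {
  if (!stage) return false;
  if (stage.stage === "SORT") return true;
  const children = stage.inputStages || (stage.inputStage ? [stage.inputStage] : []);
  return children.some(hasBlockingSort);
}

console.log(hasBlockingSort({
  stage: "FETCH",
  inputStage: { stage: "IXSCAN", keyPattern: { status: 1, createdAt: -1 } },
}));
// false - no in-memory sort needed
```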
Step 7: Optimize Projection
Reduce returned data:
```javascript
// Bad: return all fields
db.users.find({ status: "active" })

// Good: return only needed fields
db.users.find(
  { status: "active" },
  { name: 1, email: 1, _id: 0 }
)

// Covered query (no document fetch)
db.users.createIndex({ status: 1, name: 1, email: 1 })
db.users.find(
  { status: "active" },
  { name: 1, email: 1, _id: 0 }
).explain()
// Should show totalDocsExamined: 0 (PROJECTION_COVERED stage)
```
Step 8: Monitor and Tune
Continuous monitoring:
```javascript
// Top slow queries
db.system.profile.aggregate([
  { $match: { millis: { $gt: 100 } } },
  { $group: {
    _id: "$ns",
    avgTime: { $avg: "$millis" },
    maxTime: { $max: "$millis" },
    count: { $sum: 1 },
    sampleQuery: { $first: "$command" }
  }},
  { $sort: { avgTime: -1 } },
  { $limit: 10 }
])

// Index usage statistics
db.collection.aggregate([{ $indexStats: {} }])
```
Verification
Verify query optimization:
```javascript
// 1. Explain shows an index scan
let plan = db.collection.explain("executionStats").find({...})
// Inspect plan.queryPlanner.winningPlan - the leaf stage should be
// IXSCAN, not COLLSCAN (the top-level stage is often FETCH)

// 2. Docs examined close to docs returned
plan.executionStats.totalDocsExamined <= plan.executionStats.nReturned * 2

// 3. Execution time acceptable (< 100ms)
plan.executionStats.executionTimeMillis < 100

// 4. Profiler shows improvement
db.system.profile.find({
  ns: "mydb.collection",
  millis: { $gt: 100 }
}).count()
// Count should decrease over time
```
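The first three checks can be folded into one pass/fail report. A sketch (the `verifyPlan` helper is hypothetical; the 100 ms and 2x thresholds come from the checklist above and are tunable assumptions):

```javascript
// Sketch: evaluate an explain("executionStats") document against the
// verification checklist above.
function verifyPlan(explainDoc) {
  const es = explainDoc.executionStats;
  // Serializing the plan tree is a crude but nesting-proof way to
  // detect an IXSCAN anywhere in the winning plan.
  const planText = JSON.stringify(explainDoc.queryPlanner.winningPlan);
  return {
    usesIndex: planText.includes("IXSCAN"),
    scanEfficient: es.totalDocsExamined <= es.nReturned * 2,
    fastEnough: es.executionTimeMillis < 100,
  };
}

console.log(verifyPlan({
  queryPlanner: { winningPlan: { stage: "FETCH", inputStage: { stage: "IXSCAN" } } },
  executionStats: { totalDocsExamined: 100, nReturned: 100, executionTimeMillis: 10 },
}));
// { usesIndex: true, scanEfficient: true, fastEnough: true }
```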
Verify index effectiveness:
```javascript
// Check that indexes are being used
db.collection.aggregate([{ $indexStats: {} }])
// "accesses.ops" should be increasing for useful indexes

// Find unused indexes (fetch $indexStats once, not per index)
let indexStats = db.collection.aggregate([{ $indexStats: {} }]).toArray()
db.collection.getIndexes().forEach(idx => {
  let stats = indexStats.find(s => s.name === idx.name)
  if (stats && stats.accesses.ops === 0 && idx.name !== "_id_") {
    print("Unused: " + idx.name)
  }
})
```
Common Pitfalls
- Creating too many indexes - Each index adds write overhead
- Not checking index usage - Unused indexes waste space and CPU
- Wrong index field order - Must match query patterns (ESR rule)
- Ignoring projection - Returning unnecessary fields wastes resources
- Not monitoring profiler - Slow queries can silently accumulate
Best Practices
- Enable profiler with 100ms threshold during performance tuning
- Use explain("executionStats") for detailed analysis
- Follow ESR rule for compound index design
- Filter early in aggregation pipelines
- Use covered queries when possible
- Monitor index usage and remove unused indexes
- Set query timeouts in applications
- Review slow queries weekly
Related Issues
- MongoDB Index Build Failed
- MongoDB Memory Limit Exceeded
- MongoDB Aggregation Error
- MongoDB WiredTiger Cache Full