What's Actually Happening
Couchbase N1QL queries time out before completing: instead of results, the query returns a timeout error.
The Error You'll See
```bash
$ cbq -e couchbase://localhost -u user -p pass
cbq> SELECT * FROM bucket WHERE type='user';
Error: timeout
```
Query timeout:

```json
{
  "errors": [
    {
      "code": 1080,
      "msg": "Timeout exceeded before query execution completed"
    }
  ]
}
```

Index scan error:

```json
{
  "errors": [
    {
      "code": 4000,
      "msg": "No index available for query"
    }
  ]
}
```

Memory error:

```json
{
  "errors": [
    {
      "code": 5000,
      "msg": "Memory quota exceeded"
    }
  ]
}
```

Why This Happens
1. Missing index - the query must scan the full bucket
2. Complex query - too many joins or subqueries
3. Large dataset - querying millions of documents
4. Insufficient resources - cluster memory/CPU overloaded
5. Network latency - slow inter-node communication
6. Non-covering index - a fetch phase is required after the index scan
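The error codes shown earlier map fairly directly onto these causes, so timeout handling in client code can triage automatically. A minimal sketch (the code-to-cause mapping reflects only the three errors shown above; the helper itself is illustrative, not part of any Couchbase SDK):

```python
# Map the N1QL error codes shown above to a likely cause and a first fix.
# Illustrative helper, not part of any Couchbase SDK.
TRIAGE = {
    1080: ("query timeout", "raise the request timeout or optimize the query"),
    4000: ("no usable index", "create a secondary index for the predicate"),
    5000: ("memory quota exceeded", "reduce result size or raise the quota"),
}

def triage(response: dict) -> list:
    """Return a human-readable hint for each error in a query response body."""
    hints = []
    for err in response.get("errors", []):
        cause, fix = TRIAGE.get(err["code"], ("unknown error", "check query.log"))
        hints.append("code {}: {} -> {}".format(err["code"], cause, fix))
    return hints

print(triage({"errors": [{"code": 1080, "msg": "Timeout exceeded"}]}))
```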
Step 1: Check Query Service Status
```bash
# Check Couchbase cluster status:
curl -u admin:pass http://localhost:8091/pools/default

# Check query service health:
curl -u admin:pass http://localhost:8093/admin/ping

# List services per node (query nodes report "n1ql"):
curl -u admin:pass http://localhost:8091/pools/default | jq '.nodes[].services'

# View query service settings:
curl -u admin:pass http://localhost:8093/admin/settings

# Query service vitals (memory, threads, request counts):
curl -u admin:pass http://localhost:8093/admin/vitals

# Active queries:
curl -u admin:pass http://localhost:8093/admin/active_requests | jq .

# Prepared statements:
curl -u admin:pass http://localhost:8093/admin/prepareds | jq .
```
Step 2: Analyze Query Plan
```sql
-- Explain the query to see its execution plan:
EXPLAIN SELECT * FROM bucket WHERE type = 'user';

-- Check for:
-- 1. Index scan vs full (primary) scan
-- 2. Number of items scanned
-- 3. Join strategies
-- 4. Sort operations

-- View a more detailed plan:
EXPLAIN SELECT meta().id, * FROM bucket WHERE type = 'user' AND status = 'active';

-- Check currently running queries:
SELECT * FROM system:active_requests;
```

Profiling via REST:

```bash
# Request per-phase timings for a query:
curl -u admin:pass http://localhost:8093/query/service \
  -d 'statement=SELECT * FROM bucket LIMIT 1&profile=timings'
# The profile output shows time spent in each execution phase.
```
Step 3: Create Missing Indexes
```sql
-- Check existing indexes:
SELECT * FROM system:indexes WHERE keyspace_id = 'bucket';

-- Create primary index (fallback only; avoid in production):
CREATE PRIMARY INDEX ON bucket;

-- Create secondary index for common queries:
CREATE INDEX idx_type ON bucket(type);

-- Create a covering index: N1QL has no INCLUDE clause, so list the
-- projected fields as trailing index keys to make the index covering:
CREATE INDEX idx_type_status ON bucket(type, status, name, email);

-- Create composite index:
CREATE INDEX idx_user_active ON bucket(type, status, created);

-- Partial index:
CREATE INDEX idx_active_users ON bucket(name) WHERE type = 'user' AND status = 'active';

-- Array index:
CREATE INDEX idx_tags ON bucket(DISTINCT ARRAY tag FOR tag IN tags END);

-- Check index build status:
SELECT * FROM system:indexes WHERE state != 'online';

-- Build deferred indexes:
BUILD INDEX ON bucket(idx_type, idx_status);
```
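The covering rule above is mechanical: a GSI index can cover a query when every field the query references appears among the index keys. A minimal sketch of that check (illustrative only; the real planner also handles `META().id`, expressions, and collation):

```python
# Decide whether a GSI index can cover a query: every field the query
# references (predicate + projection) must appear in the index keys.
# Illustrative sketch; the real query planner considers more cases.
def is_covered(index_keys, predicate_fields, projected_fields):
    referenced = set(predicate_fields) | set(projected_fields)
    return referenced.issubset(set(index_keys))

# An index on (type, status, name, email) covers this query...
print(is_covered(["type", "status", "name", "email"],
                 {"type", "status"}, {"name", "email"}))   # True
# ...but not one that also projects "address":
print(is_covered(["type", "status", "name", "email"],
                 {"type"}, {"name", "address"}))           # False
```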
Step 4: Optimize Query Syntax
```sql
-- Avoid SELECT * for large documents:
-- BAD:
SELECT * FROM bucket WHERE type = 'user';
-- GOOD:
SELECT id, name, email FROM bucket WHERE type = 'user';

-- Use covered indexes: list every projected field as an index key:
CREATE INDEX idx_covered ON bucket(type, id, name, email);

-- Avoid functions on indexed fields:
-- BAD:
SELECT * FROM bucket WHERE LOWER(type) = 'user';
-- GOOD:
SELECT * FROM bucket WHERE type = 'user';

-- Use IN efficiently:
SELECT * FROM bucket WHERE type IN ['user', 'admin'];

-- Optimize joins: ensure join keys are indexed:
CREATE INDEX idx_ref ON bucket(ref_id);

-- Use a subquery instead of a large literal IN list:
SELECT * FROM bucket WHERE ref_id IN (
  SELECT RAW id FROM bucket WHERE type = 'reference'
);

-- Limit results:
SELECT * FROM bucket WHERE type = 'user' LIMIT 100 OFFSET 0;
```
Step 5: Increase Query Timeout
```sql
-- In a cbq session, set the request timeout before running the query:
\SET -timeout "120s";
SELECT * FROM bucket WHERE type = 'user';
```

```bash
# Per request via REST:
curl -u admin:pass http://localhost:8093/query/service \
  -d 'statement=SELECT * FROM bucket&timeout=120s'

# In a connection string:
#   couchbase://localhost?query_timeout=120000

# SDK query options (Python SDK shown):
#   cluster.query("SELECT ...", QueryOptions(timeout=timedelta(seconds=120)))

# View current node-level query settings (look for "timeout"):
curl -u admin:pass http://localhost:8093/admin/settings | jq .
```
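Raising the timeout is sometimes paired with client-side retries for transient overload, though retrying is a stopgap, not a fix. A sketch of a backoff wrapper, kept SDK-agnostic (`run_query` and Python's built-in `TimeoutError` stand in for whatever your SDK raises):

```python
# Retry a query with exponential backoff on timeout. Sketch only:
# run_query is any zero-argument callable that raises TimeoutError.
import time

def query_with_backoff(run_query, attempts=3, base_delay=0.5):
    for i in range(attempts):
        try:
            return run_query()
        except TimeoutError:
            if i == attempts - 1:
                raise  # out of attempts, surface the error
            time.sleep(base_delay * (2 ** i))  # 0.5s, 1s, 2s, ...

# Demo: a stand-in query that times out twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("query timed out")
    return ["row"]

print(query_with_backoff(flaky, base_delay=0.01))  # ['row']
```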
Step 6: Check Cluster Resources
```bash
# Check cluster overview:
curl -u admin:pass http://localhost:8091/pools/default | jq .

# Check memory totals:
curl -u admin:pass http://localhost:8091/pools/default | jq '.storageTotals.ram'

# Check query service vitals (includes memory usage):
curl -u admin:pass http://localhost:8093/admin/vitals | jq .

# View bucket stats:
curl -u admin:pass http://localhost:8091/pools/default/buckets/bucket/stats | jq .

# Query node statistics:
curl -u admin:pass http://localhost:8093/admin/stats

# Cluster resource metrics:
curl -u admin:pass http://localhost:8091/pools/default | \
  jq '{ram: .storageTotals.ram, hdd: .storageTotals.hdd}'
```

Then inspect requests from N1QL:

```sql
-- Monitor active requests from the last 5 minutes:
SELECT * FROM system:active_requests
WHERE requestTime > DATE_ADD_STR(NOW_STR(), -5, 'minute');

-- Check for timed-out queries:
SELECT * FROM system:completed_requests
WHERE state = 'timeout'
ORDER BY requestTime DESC LIMIT 10;
```
Step 7: Handle Large Datasets
```sql
-- Use pagination for large result sets:
SELECT * FROM bucket WHERE type = 'user' ORDER BY name LIMIT 100 OFFSET 0;

-- Use keyset pagination (more efficient; OFFSET still scans the skipped rows):
SELECT * FROM bucket WHERE type = 'user' AND name > 'last_name'
ORDER BY name LIMIT 100;

-- Partition large queries, e.g. by time range:
SELECT * FROM bucket WHERE type = 'user'
AND created BETWEEN '2024-01-01' AND '2024-01-31';

-- Use a covering index to avoid the fetch phase:
CREATE INDEX idx_user_name ON bucket(type, name, email, status);
SELECT name, email, status FROM bucket WHERE type = 'user';
-- No fetch needed: the index holds all projected fields.

-- Avoid cross-bucket queries: query each bucket separately if possible.
```
Step 8: Check Network and Replication
```bash
# Check node status:
curl -u admin:pass http://localhost:8091/pools/default | jq '.nodes[]'

# Check inter-node latency (ping between data nodes):
ping node2.couchbase.local

# Check XDCR (cross-datacenter replication):
curl -u admin:pass http://localhost:8091/pools/default/remoteClusters

# Check rebalance status:
curl -u admin:pass http://localhost:8091/pools/default/rebalanceProgress

# Check auto-failover settings:
curl -u admin:pass http://localhost:8091/settings/autoFailover

# Check the cluster map:
curl -u admin:pass http://localhost:8091/pools/default/buckets/bucket | jq '.vBucketServerMap'

# Test network throughput between nodes:
iperf3 -c node2

# Check for network partition: look for nodes with status != "healthy".
```
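The last check, scanning `/pools/default` for nodes whose status is not "healthy", is easy to automate once the JSON is fetched. A sketch over the parsed response (the sample data below is illustrative):

```python
# Summarize node health from a parsed /pools/default response: any node
# whose status is not "healthy" is a candidate cause of query timeouts.
def unhealthy_nodes(pools_default: dict) -> list:
    return [n["hostname"] for n in pools_default.get("nodes", [])
            if n.get("status") != "healthy"]

# Illustrative sample response fragment:
sample = {"nodes": [
    {"hostname": "node1:8091", "status": "healthy"},
    {"hostname": "node2:8091", "status": "unhealthy"},
]}
print(unhealthy_nodes(sample))  # ['node2:8091']
```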
Step 9: Monitor Query Performance
```sql
-- Check completed requests, slowest first:
SELECT * FROM system:completed_requests ORDER BY elapsedTime DESC LIMIT 10;

-- Find slow queries. elapsedTime is a duration string (e.g. "12.5s"),
-- so convert it before comparing; STR_TO_DURATION returns nanoseconds:
SELECT statement, elapsedTime, resultCount, resultSize
FROM system:completed_requests
WHERE STR_TO_DURATION(elapsedTime) > 10000000000  -- 10 seconds in ns
ORDER BY STR_TO_DURATION(elapsedTime) DESC;

-- Per-phase metrics for recent requests:
SELECT statement, phaseOperators, phaseTimes
FROM system:completed_requests
WHERE requestTime > DATE_ADD_STR(NOW_STR(), -1, 'hour');

-- Index inventory and state:
SELECT * FROM system:indexes WHERE keyspace_id = 'bucket';

-- Check the prepared statement cache:
SELECT * FROM system:prepareds;
```

```bash
# Monitor vitals via REST:
curl -u admin:pass http://localhost:8093/admin/vitals | jq .
```
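The same slow-query filter can run client-side over rows already fetched from `system:completed_requests`. A sketch that parses the `"12.5s"` / `"843.2ms"` duration strings (other units such as minutes exist; this helper handles only the two most common and is illustrative):

```python
# Filter completed_requests rows slower than a threshold. elapsedTime is
# a duration string such as "12.5s" or "843.2ms"; parse it to seconds.
# Illustrative helper over already-fetched rows; handles only ms and s.
def to_seconds(elapsed: str) -> float:
    if elapsed.endswith("ms"):          # check "ms" before the bare "s"
        return float(elapsed[:-2]) / 1000
    if elapsed.endswith("s"):
        return float(elapsed[:-1])
    raise ValueError("unrecognized duration: " + elapsed)

def slow_queries(rows, threshold_s=10.0):
    return [r for r in rows if to_seconds(r["elapsedTime"]) > threshold_s]

rows = [{"statement": "SELECT ...", "elapsedTime": "12.5s"},
        {"statement": "SELECT ...", "elapsedTime": "843.2ms"}]
print(len(slow_queries(rows)))  # 1
```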
Step 10: Couchbase Query Verification Script
```bash
# Create verification script:
cat << 'EOF' > /usr/local/bin/check-couchbase-query.sh
#!/bin/bash
HOST=${1:-"localhost"}
USER=${2:-"admin"}
PASS=${3:-"password"}
BUCKET=${4:-"bucket"}

echo "=== Query Service Status ==="
curl -s -u "$USER:$PASS" "http://$HOST:8093/admin/ping" || echo "Query service not ready"

echo ""
echo "=== Query Node Settings ==="
# Look for timeout, memory-quota, max-parallelism:
curl -s -u "$USER:$PASS" "http://$HOST:8093/admin/settings" | jq .

echo ""
echo "=== Active Queries ==="
curl -s -u "$USER:$PASS" "http://$HOST:8093/admin/active_requests" | jq '.[] | {statement, elapsedTime}'

echo ""
echo "=== Recent Timeouts ==="
cbq -e "couchbase://$HOST" -u "$USER" -p "$PASS" -q -s "SELECT statement, elapsedTime FROM system:completed_requests WHERE state = 'timeout' LIMIT 5" 2>/dev/null

echo ""
echo "=== Indexes on $BUCKET ==="
cbq -e "couchbase://$HOST" -u "$USER" -p "$PASS" -q -s "SELECT name, state FROM system:indexes WHERE keyspace_id = '$BUCKET'" 2>/dev/null

echo ""
echo "=== Bucket Stats ==="
curl -s -u "$USER:$PASS" "http://$HOST:8091/pools/default/buckets/$BUCKET/stats" | jq '.op.samples | {ops: .ops[0], items: .curr_items[0]}'

echo ""
echo "=== Cluster Memory ==="
curl -s -u "$USER:$PASS" "http://$HOST:8091/pools/default" | jq '.storageTotals.ram'

echo ""
echo "=== Recommendations ==="
echo "1. Create indexes for query predicates"
echo "2. Use covering indexes to avoid fetch"
echo "3. Avoid SELECT * on large documents"
echo "4. Increase query timeout if needed"
echo "5. Check cluster memory and CPU"
echo "6. Use pagination for large result sets"
echo "7. Monitor completed_requests for slow queries"
EOF

chmod +x /usr/local/bin/check-couchbase-query.sh

# Usage:
/usr/local/bin/check-couchbase-query.sh localhost admin password mybucket
```
Couchbase Query Timeout Checklist
| Check | Expected |
|---|---|
| Index exists | Created for query predicates |
| Index online | State is "online" |
| Query plan | Uses index scan |
| Timeout | Adequate for query |
| Cluster memory | Within quota |
| Network | Low latency |
| Covered query | No fetch phase |
Verify the Fix
After fixing the Couchbase query timeout:

```sql
-- 1. Check that all indexes are online:
SELECT * FROM system:indexes WHERE keyspace_id = 'bucket';
-- Expect: every index reports state "online".

-- 2. Explain the query:
EXPLAIN SELECT * FROM bucket WHERE type = 'user';
-- Expect: the plan uses an index scan, not a primary scan.

-- 3. Run the query:
SELECT * FROM bucket WHERE type = 'user' LIMIT 10;
-- Expect: results return quickly.

-- 4. Check the elapsed time of the latest request:
SELECT statement, elapsedTime FROM system:completed_requests
ORDER BY requestTime DESC LIMIT 1;
-- Expect: elapsedTime well under the timeout.
```

```bash
# 5. Monitor active requests (expect no long-running queries):
curl -u admin:pass http://localhost:8093/admin/active_requests

# 6. Verify the cluster is healthy (all nodes report status "healthy"):
curl -u admin:pass http://localhost:8091/pools/default
```
Related Issues
- [Fix MongoDB Index Not Used in Query](/articles/fix-mongodb-index-not-used-in-query)
- [Fix MySQL Slow Query](/articles/fix-mysql-slow-query)
- [Fix Redis High Latency](/articles/fix-redis-high-latency)