# Redis Slow Commands
## Symptoms
- Redis response times increasing significantly
- High latency on specific operations
- Application timeouts when calling Redis
- CPU spikes during certain operations
- Monitoring showing slow command logs
## Root Causes

1. **Large key operations** - Operating on keys with many elements
2. **Blocking commands** - Commands that block other operations
3. **Full data scans** - `KEYS` scanning the entire database
4. **O(N) complexity commands** - Commands with linear time complexity
5. **Memory fragmentation** - Degrading operation performance
6. **Network latency** - Large payloads taking time to transfer
## Understanding Redis Command Complexity

| Command | Complexity | Risk Level |
|---|---|---|
| GET/SET | O(1) | Low |
| HGET/HSET | O(1) | Low |
| LPOP/RPOP | O(1) | Low |
| LPUSH/RPUSH | O(1) | Low |
| SADD | O(1) | Low |
| ZADD | O(log N) | Medium |
| LRANGE | O(S+N) | High (large ranges) |
| SMEMBERS | O(N) | High (large sets) |
| HGETALL | O(N) | High (large hashes) |
| KEYS | O(N) | Very High |
| DEL (large key) | O(N) | High |
| SORT | O(N+M*log M) | High |
## Diagnosis Steps

### Step 1: Enable Slow Log
```bash
# Check slow log configuration
redis-cli CONFIG GET slowlog-log-slower-than
redis-cli CONFIG GET slowlog-max-len

# Log commands slower than 10000 microseconds (10 ms)
redis-cli CONFIG SET slowlog-log-slower-than 10000

# Set the maximum number of entries to keep
redis-cli CONFIG SET slowlog-max-len 128
```
### Step 2: View Slow Log
```bash
# View the 10 most recent slow commands
redis-cli SLOWLOG GET 10

# Reset the slow log
redis-cli SLOWLOG RESET

# Get the slow log length
redis-cli SLOWLOG LEN
```
Each slow log entry contains:

1. Unique ID
2. Unix timestamp
3. Execution time in microseconds
4. Command and arguments as an array
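The entry layout above can be turned into something human-readable with a small helper. This is a sketch; it assumes entries arrive in the raw `[id, timestamp, usec, command-array]` shape described above, and the function name is illustrative:

```python
# Hypothetical helper: convert raw slow log entries of the shape
# [id, timestamp, usec, command-array] into readable dicts.
def parse_slowlog(entries):
    parsed = []
    for entry_id, timestamp, usec, command in entries:
        parsed.append({
            "id": entry_id,
            "timestamp": timestamp,
            "duration_ms": usec / 1000.0,  # Redis reports microseconds
            "command": " ".join(str(arg) for arg in command),
        })
    return parsed

# Example entry: a KEYS call that took 500 ms
sample = [[14, 1700000000, 500000, ["KEYS", "user:*"]]]
print(parse_slowlog(sample)[0]["duration_ms"])  # 500.0
```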
### Step 3: Analyze Command Statistics
```bash
# View per-command statistics
redis-cli INFO commandstats

# Example output:
# cmdstat_get:calls=100000,usec=50000,usec_per_call=0.50
# cmdstat_keys:calls=10,usec=5000000,usec_per_call=500000
```
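Output in this format can be screened automatically. A minimal sketch (the function name and default threshold are illustrative) that flags commands whose average latency exceeds a threshold:

```python
def slow_commands(commandstats_text, threshold_usec=10000.0):
    """Return {command: usec_per_call} for commands averaging above threshold.

    Expects the raw text of `INFO commandstats` as shown above.
    """
    slow = {}
    for line in commandstats_text.splitlines():
        line = line.strip()
        if not line.startswith("cmdstat_"):
            continue
        name, stats = line.split(":", 1)
        fields = dict(pair.split("=") for pair in stats.split(","))
        per_call = float(fields["usec_per_call"])
        if per_call >= threshold_usec:
            slow[name[len("cmdstat_"):]] = per_call
    return slow

info = """cmdstat_get:calls=100000,usec=50000,usec_per_call=0.50
cmdstat_keys:calls=10,usec=5000000,usec_per_call=500000"""
print(slow_commands(info))  # {'keys': 500000.0}
```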
### Step 4: Identify Large Keys
```bash
# Sample the keyspace for large keys
redis-cli --bigkeys

# Check a specific key's size
redis-cli DEBUG OBJECT keyname
redis-cli MEMORY USAGE keyname

# Per data type
redis-cli STRLEN keyname   # String length
redis-cli HLEN keyname     # Hash field count
redis-cli LLEN keyname     # List length
redis-cli SCARD keyname    # Set member count
redis-cli ZCARD keyname    # Sorted set member count
```
## Solutions

### Solution 1: Replace KEYS with SCAN
The `KEYS` command scans the entire keyspace and, because Redis executes commands on a single thread, blocks every other operation until it finishes.
Before (Bad):

```bash
# This blocks Redis for large databases
redis-cli KEYS "user:*"
```

After (Good):
```bash
# Use SCAN for incremental iteration
redis-cli SCAN 0 MATCH "user:*" COUNT 100

# Iterate through all matches
cursor=0
while true; do
    result=$(redis-cli SCAN "$cursor" MATCH "user:*" COUNT 100)
    cursor=$(echo "$result" | head -1)
    keys=$(echo "$result" | tail -n +2)
    echo "$keys"
    if [ "$cursor" = "0" ]; then
        break
    fi
done
```
In application code:

```js
// Node.js with ioredis
async function findKeys(pattern) {
  const keys = [];
  let cursor = '0';
  do {
    const result = await redis.scan(cursor, 'MATCH', pattern, 'COUNT', 100);
    cursor = result[0];
    keys.push(...result[1]);
  } while (cursor !== '0');
  return keys;
}
```

```python
# Python with redis-py (r is a connected client)
def find_keys(pattern, count=100):
    keys = []
    cursor = 0
    while True:
        cursor, batch = r.scan(cursor=cursor, match=pattern, count=count)
        keys.extend(batch)
        if cursor == 0:
            break
    return keys
```

### Solution 2: Replace SMEMBERS with SSCAN
```bash
# Before (Bad for large sets)
redis-cli SMEMBERS "large-set"

# After (Good)
redis-cli SSCAN "large-set" 0 COUNT 100
```
### Solution 3: Replace HGETALL with HSCAN
```bash
# Before (Bad for large hashes)
redis-cli HGETALL "user:12345"

# After (Good) - get only needed fields
redis-cli HGET "user:12345" "name"
redis-cli HGET "user:12345" "email"

# Or use HSCAN for many fields
redis-cli HSCAN "user:12345" 0 COUNT 100
```
### Solution 4: Avoid Large LRANGE
```bash
# Before (Bad - gets the entire list)
redis-cli LRANGE "logs" 0 -1

# After (Good - paginate)
redis-cli LRANGE "logs" 0 99     # First 100
redis-cli LRANGE "logs" 100 199  # Next 100

# Use pagination in the application
redis-cli LLEN "logs"  # Get total length, then paginate through results
```
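The pagination pattern can be wrapped in a generator. This is a sketch: `client` is assumed to expose a redis-py-style `lrange`, shown here with a stand-in fake so the logic is self-contained:

```python
def paginate_list(client, key, page_size=100):
    """Yield a Redis list in bounded chunks instead of one huge LRANGE."""
    start = 0
    while True:
        chunk = client.lrange(key, start, start + page_size - 1)
        if not chunk:
            break
        yield chunk
        start += page_size

# Stand-in client for illustration; redis-py exposes the same method shape.
class FakeClient:
    def __init__(self, items):
        self.items = items

    def lrange(self, key, start, stop):
        return self.items[start:stop + 1]  # LRANGE bounds are inclusive

logs = FakeClient([f"line-{i}" for i in range(250)])
pages = list(paginate_list(logs, "logs", page_size=100))
print([len(p) for p in pages])  # [100, 100, 50]
```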
### Solution 5: Delete Large Keys Safely
```bash
# Before (Bad - blocks Redis)
redis-cli DEL "large-list"

# After (Good - UNLINK frees the memory asynchronously)
redis-cli UNLINK "large-list"

# Or delete incrementally
# For lists
while [ "$(redis-cli LLEN "large-list")" -gt 0 ]; do
    redis-cli LTRIM "large-list" 0 -1000   # Trim a batch from the tail
    sleep 0.1
done
redis-cli DEL "large-list"

# For sets
while [ "$(redis-cli SCARD "large-set")" -gt 0 ]; do
    redis-cli SPOP "large-set" 100
    sleep 0.1
done
```
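The same incremental approach works from application code. A sketch using a stand-in client (a real redis-py client exposes `scard` and `spop` with the same shapes); the helper name is illustrative:

```python
import time

def delete_set_incrementally(client, key, batch=100, pause=0.1):
    """Drain a large set in small SPOP batches to avoid one long blocking DEL."""
    removed = 0
    while client.scard(key) > 0:
        popped = client.spop(key, batch)   # Removes up to `batch` members
        removed += len(popped)
        time.sleep(pause)                  # Give other commands a chance to run
    return removed

# Stand-in client for illustration only.
class FakeClient:
    def __init__(self, members):
        self.members = set(members)

    def scard(self, key):
        return len(self.members)

    def spop(self, key, count):
        return [self.members.pop() for _ in range(min(count, len(self.members)))]

client = FakeClient(f"user-{i}" for i in range(250))
print(delete_set_incrementally(client, "large-set", batch=100, pause=0))  # 250
```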
### Solution 6: Use Lazy Delete
Enable lazy deletion in Redis config:
```bash
# Enable lazy (asynchronous) freeing
redis-cli CONFIG SET lazyfree-lazy-eviction yes
redis-cli CONFIG SET lazyfree-lazy-expire yes
redis-cli CONFIG SET lazyfree-lazy-server-del yes
redis-cli CONFIG SET replica-lazy-flush yes
```

### Solution 7: Use Blocking Commands Carefully
Commands like `BLPOP` and `BRPOP` block the calling client until data is available or the timeout expires:
```bash
# Set a reasonable timeout (0 blocks forever)
redis-cli BLPOP "queue" 5   # Wait at most 5 seconds

# In application code, handle the timeout gracefully
```
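One way to handle the timeout gracefully is a consumer loop that treats a timed-out `BLPOP` as a signal rather than an error. A sketch with a stand-in client (redis-py's `blpop` likewise returns `None` on timeout); the function and parameter names are illustrative:

```python
def consume(client, queue, handler, timeout=5, max_idle_polls=3):
    """Process queue items; stop after several consecutive empty polls."""
    idle = 0
    processed = 0
    while idle < max_idle_polls:
        item = client.blpop(queue, timeout=timeout)  # None when timed out
        if item is None:
            idle += 1          # Timed out: count it instead of blocking forever
            continue
        idle = 0
        handler(item[1])       # item is a (queue-name, value) pair
        processed += 1
    return processed

# Stand-in client for illustration only.
class FakeClient:
    def __init__(self, items):
        self.items = list(items)

    def blpop(self, queue, timeout=0):
        return (queue, self.items.pop(0)) if self.items else None

seen = []
client = FakeClient(["job-1", "job-2"])
print(consume(client, "queue", seen.append, timeout=0, max_idle_polls=1))  # 2
```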
### Solution 8: Batch Operations
Use pipelining or transactions to reduce round trips:
```bash
# Pipeline multiple commands
redis-cli --pipe << EOF
SET key1 value1
SET key2 value2
SET key3 value3
EOF

# Or use MULTI/EXEC - must run in a single connection;
# separate redis-cli invocations would each open a new connection
# and lose the transaction state
redis-cli << EOF
MULTI
SET key1 value1
SET key2 value2
EXEC
EOF
```
In application:

```js
// Node.js pipeline
const pipeline = redis.pipeline();
pipeline.set('key1', 'value1');
pipeline.set('key2', 'value2');
pipeline.set('key3', 'value3');
const results = await pipeline.exec();
```

```python
# Python pipeline
pipe = r.pipeline()
pipe.set('key1', 'value1')
pipe.set('key2', 'value2')
pipe.set('key3', 'value3')
pipe.execute()
```

### Solution 9: Optimize Data Structure Choice
```bash
# Use appropriate structures
# Small objects -> Hash (more memory efficient)
# (HSET accepts multiple field-value pairs; HMSET is deprecated)
HSET user:1 name "John" email "john@example.com" age "30"

# Ordered data -> Sorted Set (efficient range queries)
ZADD leaderboard 100 "player1"
ZADD leaderboard 95 "player2"
ZREVRANGE leaderboard 0 9   # Top 10 players

# Unique items -> Set (efficient membership test)
SADD unique:visitors "user123"
SISMEMBER unique:visitors "user123"
```
### Solution 10: Use Lua Scripts for Complex Operations
```lua
-- increment.lua: atomic counter update
local current = tonumber(redis.call('GET', KEYS[1])) or 0
local new = current + tonumber(ARGV[1])
redis.call('SET', KEYS[1], new)
return new
```

```bash
# Key names go before the comma, arguments after it
redis-cli --eval increment.lua counter , 1
```

## Configuration for Performance
```ini
# /etc/redis/redis.conf

# Slow log: log commands slower than 10000 microseconds (10 ms)
slowlog-log-slower-than 10000
slowlog-max-len 128

# Lazy delete
lazyfree-lazy-eviction yes
lazyfree-lazy-expire yes
lazyfree-lazy-server-del yes

# Memory optimization
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
set-max-intset-entries 512

# Disable dangerous commands (renaming to "" disables the command)
rename-command KEYS ""
rename-command FLUSHALL ""
rename-command FLUSHDB ""
```
## Monitoring and Alerts
```bash
#!/bin/bash
# redis_slow_commands_monitor.sh

# Get slow log count
SLOW_COUNT=$(redis-cli SLOWLOG LEN)

if [ "$SLOW_COUNT" -gt 10 ]; then
    echo "WARNING: $SLOW_COUNT slow commands detected"
    redis-cli SLOWLOG GET 5
fi

# Flag commands averaging >= 10000 usec (5+ digits) per call
redis-cli INFO commandstats | grep -E "usec_per_call=[0-9]{5,}"
```
## Prevention Checklist
- [ ] Never use KEYS in production
- [ ] Use SCAN for key iteration
- [ ] Limit range operations (LRANGE, SMEMBERS)
- [ ] Use UNLINK instead of DEL for large keys
- [ ] Enable slow log monitoring
- [ ] Set up alerts for slow commands
- [ ] Choose appropriate data structures
- [ ] Use pipelining for batch operations
- [ ] Monitor command statistics
## Related Errors
- [Redis Out of Memory](./fix-redis-out-of-memory)
- [Redis Key Eviction Issues](./fix-redis-key-eviction)