The Problem

Your Redis instance shows intermittent latency spikes. Operations that normally complete in milliseconds occasionally take seconds. Applications time out or suffer degraded performance. The slow log contains the evidence you need to diagnose the cause.

Understanding Redis Slow Log

The slow log records commands that exceed a configured execution time threshold. Unlike database slow logs that measure total query duration, Redis records only the time spent executing the command itself, excluding:

  • Network latency
  • Client processing
  • Command queuing time

This shows actual Redis CPU time.

Enabling and Configuring Slow Log

Check Current Configuration

```bash
redis-cli CONFIG GET 'slowlog-*'
```

Output:

```bash
1) "slowlog-log-slower-than"
2) "10000"
3) "slowlog-max-len"
4) "128"
```

Configuration Parameters

  • slowlog-log-slower-than: Threshold in microseconds (default 10000 = 10 ms); 0 logs every command, and a negative value disables logging
  • slowlog-max-len: Maximum number of entries kept; when full, the oldest entry is evicted

Configure for Better Visibility

```bash
# Log commands slower than 1ms (1000 microseconds)
redis-cli CONFIG SET slowlog-log-slower-than 1000

# Keep more entries for analysis
redis-cli CONFIG SET slowlog-max-len 1000
```

For comprehensive monitoring:

```bash
# Log everything above 100 microseconds
redis-cli CONFIG SET slowlog-log-slower-than 100

# Keep 5000 entries
redis-cli CONFIG SET slowlog-max-len 5000
```

CONFIG SET changes are lost on restart; run CONFIG REWRITE or edit redis.conf to make them permanent.

Reading the Slow Log

View Slow Log Entries

```bash
redis-cli SLOWLOG GET
```

Output format:

```bash
1) 1) (integer) 42              # Unique ID
   2) (integer) 1609459200      # Unix timestamp
   3) (integer) 15000           # Execution time in microseconds
   4) 1) "KEYS"                 # Command
      2) "user:*"               # Arguments
   5) "10.0.0.5:54321"          # Client address (Redis 4.0+)
   6) "app-server-1"            # Client name, if set via CLIENT SETNAME (Redis 4.0+)
```
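If you pull the slow log from application code, each raw entry arrives as a nested array in the shape above (client libraries such as redis-py can also return it pre-parsed). A minimal Python sketch of turning one raw entry into a readable record; `SlowlogEntry` and `parse_slowlog_entry` are illustrative names, not part of any client API:

```python
from dataclasses import dataclass, field

@dataclass
class SlowlogEntry:
    entry_id: int
    timestamp: int
    duration_us: int
    command: list = field(default_factory=list)
    client_addr: str = ""
    client_name: str = ""

def parse_slowlog_entry(raw):
    # Raw reply shape: [id, unix_ts, duration_us, [cmd, *args], addr, name].
    # The last two fields are only present on Redis 4.0+.
    return SlowlogEntry(
        entry_id=int(raw[0]),
        timestamp=int(raw[1]),
        duration_us=int(raw[2]),
        command=list(raw[3]),
        client_addr=raw[4] if len(raw) > 4 else "",
        client_name=raw[5] if len(raw) > 5 else "",
    )

entry = parse_slowlog_entry(
    [42, 1609459200, 15000, ["KEYS", "user:*"], "10.0.0.5:54321", "app-server-1"]
)
print(entry.command[0], entry.duration_us / 1000, "ms")  # → KEYS 15.0 ms
```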

Get Specific Number of Entries

```bash
redis-cli SLOWLOG GET 20
```

Get Slow Log Length

```bash
redis-cli SLOWLOG LEN
```

Clear Slow Log

```bash
redis-cli SLOWLOG RESET
```

Analyzing Slow Log Entries

Identify Problem Commands

```bash
# Pull a larger batch for analysis; entries are returned newest first
redis-cli SLOWLOG GET 100
```

Common Slow Commands

#### KEYS Command

```bash
1) 1) (integer) 15
   2) (integer) 1609459200
   3) (integer) 50000000
   4) 1) "KEYS"
      2) "*"
```

Problem: KEYS scans the entire keyspace (O(N)) and blocks the server while doing so; the entry above took 50 seconds.

Solution: Use SCAN:

```bash
# Instead of
redis-cli KEYS "user:*"

# Use
redis-cli SCAN 0 MATCH "user:*" COUNT 100
```
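A single SCAN call returns only one batch plus a cursor; the caller must loop until the cursor comes back as 0. A Python sketch of that cursor loop, written against an injected `scan` callable (a stand-in for a real client method) so the pattern stands alone:

```python
def scan_all(scan, match="user:*", count=100):
    """Drive the SCAN cursor until the server returns 0 (iteration done)."""
    cursor = 0
    while True:
        cursor, keys = scan(cursor, match=match, count=count)
        yield from keys
        if int(cursor) == 0:
            break

# Stand-in for a real client: two pages, then cursor 0 ends the scan.
pages = {0: (17, ["user:1", "user:2"]), 17: (0, ["user:3"])}
fake_scan = lambda cursor, match, count: pages[cursor]
print(list(scan_all(fake_scan)))  # → ['user:1', 'user:2', 'user:3']
```

Each loop iteration is a cheap server-side step, so other clients are never blocked for long, unlike a single KEYS call.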

#### Large SORT Operations

```bash
1) 1) (integer) 16
   2) (integer) 1609459201
   3) (integer) 25000000
   4) 1) "SORT"
      2) "user:scores"
      3) "BY"
      4) "nosort"
      5) "GET"
      6) "#"
```

Problem: SORT is O(N + M*log(M)) in general; even with BY nosort (which skips the sort, as in the entry above), fetching every element with GET is still O(N) on a large collection.

Solution: Use sorted sets:

```bash
# Instead of SORT on a list
redis-cli SORT mylist BY nosort

# Use a sorted set with pre-sorted data
redis-cli ZADD scores 100 user1 200 user2
redis-cli ZRANGE scores 0 -1
```

#### Large SET Operations

```bash
1) 1) (integer) 17
   2) (integer) 1609459202
   3) (integer) 15000000
   4) 1) "SUNION"
      2) "set1"
      3) "set2"
      4) "set3"
```

Problem: SUNION is O(N), where N is the total number of elements across all input sets, and the entire result is built and sent to the client in one reply.

Solution: Reduce set sizes, or build the result server-side in steps:

```bash
# Instead of a large union returned to the client
redis-cli SUNION set1 set2 set3

# Build it incrementally with SUNIONSTORE, keeping the result in Redis
redis-cli SUNIONSTORE temp set1 set2
redis-cli SUNIONSTORE result temp set3
redis-cli DEL temp
```

#### Lua Scripts

```bash
1) 1) (integer) 18
   2) (integer) 1609459203
   3) (integer) 30000000
   4) 1) "EVAL"
      2) "local keys = redis.call('KEYS', ARGV[1]) ..."
```

Problem: Long-running Lua scripts block Redis.

Solution: Optimize scripts or break into smaller operations:

```lua
-- Bad: touches every key in a single call, blocking the server throughout
local keys = redis.call('KEYS', 'user:*')
for i = 1, #keys do
    redis.call('GET', keys[i])
end

-- Good: process one SCAN page per invocation; the caller passes the
-- returned cursor back in as ARGV[1] until it comes back as "0"
local result = redis.call('SCAN', ARGV[1], 'MATCH', 'user:*', 'COUNT', 100)
return result
```

#### HGETALL on Large Hashes

```bash
1) 1) (integer) 19
   2) (integer) 1609459204
   3) (integer) 10000000
   4) 1) "HGETALL"
      2) "large:hash:with:many:fields"
```

Problem: Returns all fields, potentially megabytes of data.

Solution: Use HSCAN:

```bash
# Instead of
redis-cli HGETALL largehash

# Use
redis-cli HSCAN largehash 0 COUNT 100
```

Identifying Patterns

Find Most Frequent Slow Commands

```bash
# The command name is the quoted word on each entry's "4) 1)" line
redis-cli SLOWLOG GET 100 | grep -E '^\s+4\) 1\) "' | awk '{print $3}' | tr -d '"' | sort | uniq -c | sort -rn
```
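When a shell pipeline gets unwieldy, the same tally is easy to do in Python. A sketch that counts command names in `redis-cli SLOWLOG GET` text output, relying only on the `4) 1) "CMD"` line shape shown earlier; `count_slow_commands` is an illustrative helper, not an existing tool:

```python
import re
from collections import Counter

def count_slow_commands(slowlog_text):
    # The command name is the quoted word on the '4) 1) "CMD"' line
    # of each slow log entry in redis-cli's text output.
    pattern = re.compile(r'4\)\s+1\)\s+"([^"]+)"')
    return Counter(pattern.findall(slowlog_text))

sample = '''\
1) 1) (integer) 42
   2) (integer) 1609459200
   3) (integer) 15000
   4) 1) "KEYS"
      2) "user:*"
2) 1) (integer) 43
   2) (integer) 1609459300
   3) (integer) 12000
   4) 1) "KEYS"
      2) "session:*"
'''
print(count_slow_commands(sample).most_common())  # → [('KEYS', 2)]
```

Feed it `subprocess.run(["redis-cli", "SLOWLOG", "GET", "100"], capture_output=True, text=True).stdout` to analyze a live instance.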

Find Slow Commands by Client

```bash
# The client address is on each entry's "5)" line (Redis 4.0+)
redis-cli SLOWLOG GET 50 | grep -E '^\s+5\) "' | sort | uniq -c | sort -rn
```

Find Slowest Operations

```bash
# Execution time (microseconds) is on each entry's "3)" line
redis-cli SLOWLOG GET 50 | grep -E '^\s+3\) \(integer\)' | awk '{print $3}' | sort -rn | head -10
```

Latency Diagnosis

Use LATENCY Command

```bash
# Enable latency monitoring (threshold in milliseconds)
redis-cli CONFIG SET latency-monitor-threshold 100

# Check latest latency events
redis-cli LATENCY LATEST

# Get history for a specific event
redis-cli LATENCY HISTORY command

# Reset latency stats
redis-cli LATENCY RESET
```

Debug Latency

```bash
# Artificially block the server for 10ms (testing only, never in production)
redis-cli DEBUG SLEEP 0.01

# Measure round-trip latency from the client
redis-cli --latency

# Sample latency over time
redis-cli --latency-history
```

Solutions by Command Type

Avoid These Patterns

  1. KEYS with wildcards - Always use SCAN
  2. HGETALL on large hashes - Use HSCAN
  3. SORT on large lists - Use sorted sets
  4. Large SUNION/SINTER - Pre-compute or use smaller sets
  5. Unbounded LRANGE - Always specify a reasonable range
  6. Long Lua scripts - Break into smaller chunks

Recommended Patterns

```bash
# Pagination with SCAN
redis-cli SCAN 0 MATCH "user:*" COUNT 100
redis-cli SCAN <cursor> MATCH "user:*" COUNT 100

# Limited ranges
redis-cli LRANGE mylist 0 99   # First 100 elements

# Batch processing
for i in $(seq 0 100 10000); do
    redis-cli ZRANGE scores $i $((i+99))
done
```
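The batch-processing idea can also be sketched in Python, with the fetch step injected so the pagination logic is visible on its own (with a real client, the fetch would be a ZRANGE call); `paged_zrange` is a hypothetical helper:

```python
def paged_zrange(fetch, page_size=100):
    """Pull a sorted set in fixed-size pages instead of one unbounded ZRANGE."""
    start = 0
    while True:
        # fetch(start, stop) stands in for r.zrange("scores", start, stop)
        page = fetch(start, start + page_size - 1)
        if not page:
            break
        yield from page
        start += page_size

# Stand-in for a real client call: 250 fake members.
members = [f"user{i}" for i in range(250)]
fake_fetch = lambda start, stop: members[start:stop + 1]
total = sum(1 for _ in paged_zrange(fake_fetch))
print(total)  # → 250
```

Each page is a small O(log(N)+M) request, so no single command holds the server for long.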

Monitoring Script

```bash
#!/bin/bash
# Monitor the slow log continuously, reporting when new entries appear

LAST_ID=-1

while true; do
    # The newest entry's ID is on the first line of SLOWLOG GET 1:
    # "1) 1) (integer) <id>"
    CURRENT_ID=$(redis-cli SLOWLOG GET 1 | head -1 | awk '{print $4}')
    if [ -n "$CURRENT_ID" ] && [ "$CURRENT_ID" -gt "$LAST_ID" ]; then
        echo "$(date): new slow log entries up to ID $CURRENT_ID"
        redis-cli SLOWLOG GET 10
        LAST_ID=$CURRENT_ID
    fi
    sleep 60
done
```

Verification

After making changes:

```bash
# Reset slow log
redis-cli SLOWLOG RESET

# Wait for traffic
sleep 300

# Check slow log
redis-cli SLOWLOG GET 20

# Should see fewer entries or different commands
```