The Problem

Your Redis instance reports using far more memory than the size of your data. The memory fragmentation ratio shows concerning values, and in the worst cases the kernel's OOM (Out-Of-Memory) killer terminates the Redis process. Applications experience sudden disconnections and data loss when this happens.

Understanding Memory Fragmentation

Redis memory usage consists of:

  1. Used memory - Actual data stored
  2. RSS (Resident Set Size) - Memory the OS has allocated to the process
  3. Fragmented memory - Gaps between allocations that cannot be reused

The fragmentation ratio is used_memory_rss / used_memory. A ratio slightly above 1.0 is ideal. Above 1.5 indicates problematic fragmentation, while a ratio below 1.0 means part of Redis memory has been swapped to disk, which is worse.
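As a sanity check, the ratio can be computed by hand from the two INFO fields. A minimal shell sketch using the sample numbers from the metrics below (1 GB of data, 2 GB RSS):

```shell
# Hypothetical values copied from INFO memory (sample numbers)
used_memory=1073741824
used_memory_rss=2147483648

# Ratio = RSS / used memory; anything above ~1.5 deserves attention
ratio=$(awk -v rss="$used_memory_rss" -v used="$used_memory" \
  'BEGIN { printf "%.2f", rss / used }')
echo "fragmentation ratio: $ratio"
```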

Diagnosis Commands

Check Memory Stats

bash
redis-cli INFO memory

Key metrics to examine:

bash
used_memory:1073741824           # Data size
used_memory_rss:2147483648       # OS allocated memory
mem_fragmentation_ratio:2.00     # Should be near 1.0
mem_allocator:jemalloc-5.2.1    # Allocator in use

Detailed Memory Breakdown

bash
redis-cli MEMORY STATS

This shows memory by category:

bash
peak.allocated:2147483648
total.allocated:1073741824
db.0:overhead.hashtable.main:10485760
db.0:overhead.hashtable.expires:1048576
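The per-category lines can be summed offline. For example, totaling the hashtable overhead from a saved MEMORY STATS dump (sample numbers copied from above):

```shell
# Save a MEMORY STATS snapshot to a file (sample lines shown here)
cat > /tmp/memstats.txt <<'EOF'
db.0:overhead.hashtable.main:10485760
db.0:overhead.hashtable.expires:1048576
EOF

# Sum every hashtable-overhead line; the byte count is the last colon field
overhead=$(awk -F: '/overhead\.hashtable/ { total += $NF } END { print total }' /tmp/memstats.txt)
echo "hashtable overhead bytes: $overhead"
```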

Get Automated Recommendations

bash
redis-cli MEMORY DOCTOR

Redis provides recommendations based on current state.

What Causes Fragmentation

Cause 1: Frequent Key Deletion

When keys are deleted, the freed memory creates gaps. If new keys don't fit these gaps, fragmentation increases.

bash
# Check delete rate
# Check overall command volume
redis-cli INFO stats | grep total_commands_processed
# Per-command counters (including DEL) live in the commandstats section
redis-cli INFO commandstats | grep cmdstat_del
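Key churn can also be quantified offline from a saved commandstats snapshot. A sketch over sample lines (the numbers are made up): a delete count close to the write count means heavy churn.

```shell
# Saved INFO commandstats snapshot (illustrative sample values)
cat > /tmp/commandstats.txt <<'EOF'
cmdstat_set:calls=500000,usec=2500000,usec_per_call=5.00
cmdstat_del:calls=450000,usec=1800000,usec_per_call=4.00
EOF

# Split on '=' and ',' so the calls= value is the second field
dels=$(awk -F'[=,]' '/cmdstat_del:/ { print $2 }' /tmp/commandstats.txt)
sets=$(awk -F'[=,]' '/cmdstat_set:/ { print $2 }' /tmp/commandstats.txt)
echo "dels=$dels sets=$sets"
```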

Cause 2: Varying Key Sizes

Mixing small and large keys prevents efficient memory reuse.

bash
# Find large keys
redis-cli --bigkeys

Cause 3: Long-Running Instance

Memory allocator fragmentation accumulates over time.

bash
# Check uptime
redis-cli INFO server | grep uptime_in_days

Cause 4: Wrong Allocator

Redis supports different allocators (libc, jemalloc, tcmalloc). Jemalloc typically has better fragmentation characteristics.

bash
redis-cli INFO memory | grep mem_allocator

Solutions

Solution 1: Active Defragmentation

Redis 4.0+ supports active defragmentation (it requires a jemalloc build, which is the default on Linux). Enable it:

bash
# Enable active defragmentation
redis-cli CONFIG SET activedefrag yes

Configure thresholds in redis.conf:

conf
# Enable active defragmentation
activedefrag yes
active-defrag-ignore-bytes 100mb
active-defrag-threshold-lower 10
active-defrag-threshold-upper 100
active-defrag-cycle-min 1
active-defrag-cycle-max 25
active-defrag-max-scan-fields 1000

Parameters explained:

  • ignore-bytes: Minimum amount of fragmented bytes before defrag starts
  • threshold-lower: Start defrag once the fragmentation percentage exceeds this value
  • threshold-upper: Apply maximum effort once fragmentation reaches this percentage
  • cycle-min/max: Minimum and maximum CPU percentage spent on defragmentation

Monitor defragmentation:

bash
redis-cli INFO memory | grep -E "active_defrag|mem_fragmentation"
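To judge whether the defragmenter is making progress, the active_defrag_hits and active_defrag_misses counters can be turned into a hit ratio. A sketch over a saved snapshot (the counter values are illustrative):

```shell
# Sample INFO memory lines captured while defrag runs (illustrative values)
cat > /tmp/defrag.txt <<'EOF'
active_defrag_hits:2500
active_defrag_misses:500
mem_fragmentation_ratio:1.42
EOF

# Hit ratio: fraction of scanned allocations the defragmenter actually moved
hit_ratio=$(awk -F: '/active_defrag_hits/ { h = $2 }
                     /active_defrag_misses/ { m = $2 }
                     END { printf "%.2f", h / (h + m) }' /tmp/defrag.txt)
echo "defrag hit ratio: $hit_ratio"
```

A ratio that stays near zero while mem_fragmentation_ratio stays high suggests the thresholds are set too conservatively.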

Solution 2: Restart with Jemalloc

If using libc allocator, switching to jemalloc helps:

bash
# Check current allocator
redis-cli INFO memory | grep mem_allocator

# If libc, rebuild Redis with jemalloc
make MALLOC=jemalloc

Solution 3: Reduce Memory Usage

Lower overall memory usage reduces fragmentation overhead:

bash
# Enable compression for lists
redis-cli CONFIG SET list-compress-depth 1
redis-cli CONFIG SET list-max-ziplist-size -2

# Use ziplist for small hashes
redis-cli CONFIG SET hash-max-ziplist-entries 512
redis-cli CONFIG SET hash-max-ziplist-value 64

# Use intset for small sets of integers
redis-cli CONFIG SET set-max-intset-entries 512

Solution 4: Periodic Restart (Legacy)

For Redis versions before 4.0, or when defrag isn't effective:

bash
# Use Replica failover for zero-downtime restart
# 1. Promote replica to master
# 2. Restart old master
# 3. Let it resync as new replica

With Sentinel:

bash
redis-cli -p 26379 SENTINEL FAILOVER mymaster
# Then restart the demoted node

Solution 5: Control Memory Limit

Set appropriate maxmemory to trigger early eviction:

bash
redis-cli CONFIG SET maxmemory 8gb
redis-cli CONFIG SET maxmemory-policy allkeys-lru

This caps Redis' data memory (used_memory), which in turn keeps RSS from growing unbounded.
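Choosing the limit itself can be scripted. A sketch that derives 80% of total RAM (reads /proc/meminfo on Linux, with a hardcoded 16 GB fallback so the sketch runs anywhere):

```shell
# Total RAM in kB from /proc/meminfo (Linux); empty on other systems
total_kb=$(awk '/^MemTotal:/ { print $2 }' /proc/meminfo 2>/dev/null)
total_kb=${total_kb:-16777216}   # 16 GB fallback (assumed value)

# 80% of RAM, converted to megabytes for the maxmemory directive
maxmem_mb=$(( total_kb * 80 / 100 / 1024 ))
echo "suggested maxmemory: ${maxmem_mb}mb"
```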

Monitoring Memory Fragmentation

Continuous Monitoring Script

bash
#!/bin/bash
# Check fragmentation every minute
while true; do
  # Anchor the field names so the *_human variants are not matched too
  RATIO=$(redis-cli INFO memory | grep '^mem_fragmentation_ratio:' | cut -d: -f2 | tr -d '\r')
  RSS=$(redis-cli INFO memory | grep '^used_memory_rss:' | cut -d: -f2 | tr -d '\r')
  USED=$(redis-cli INFO memory | grep '^used_memory:' | cut -d: -f2 | tr -d '\r')

  echo "$(date): Ratio=$RATIO RSS=$RSS Used=$USED"

  # Alert if ratio exceeds 1.5
  if (( $(echo "$RATIO > 1.5" | bc -l) )); then
    echo "ALERT: High fragmentation ratio: $RATIO"
    # Optionally trigger defrag
    redis-cli CONFIG SET activedefrag yes
  fi

  sleep 60
done

Key Metrics to Track

bash
# Fragmentation ratio
redis-cli --no-raw INFO memory | grep mem_fragmentation_ratio

# Peak memory usage
redis-cli --no-raw INFO memory | grep used_memory_peak

# Allocator efficiency
redis-cli --no-raw INFO memory | grep allocator_allocated

Verification

After applying fixes:

bash
# Check improvement
redis-cli INFO memory | grep -E "used_memory|mem_fragmentation_ratio"

Expected results:

  • Fragmentation ratio between 1.0 and 1.3
  • RSS close to used memory
  • No OOM kills in logs
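The first check can be automated in the monitoring pipeline: compare the read-back ratio against the healthy band. A sketch with an illustrative value:

```shell
# Ratio read back after enabling defrag (illustrative value, not measured)
ratio=1.18

# awk exits 0 when the ratio sits in the healthy 1.0-1.3 band
if awk -v r="$ratio" 'BEGIN { exit !(r >= 1.0 && r <= 1.3) }'; then
  verdict="healthy"
else
  verdict="still fragmented"
fi
echo "fragmentation $verdict"
```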

Prevention Strategies

  1. Enable active defragmentation by default:

conf
# redis.conf
active-defrag-ignore-bytes 100mb
active-defrag-threshold-lower 10
activedefrag yes

  2. Set memory limits:

conf
maxmemory <80% of available RAM>
maxmemory-policy allkeys-lru

  3. Use consistent key sizing:

Avoid mixing very small and very large values in the same instance.

  4. Monitor fragmentation trend:

Track mem_fragmentation_ratio over time to catch issues early.