What's Actually Happening

Redis runs out of available memory when stored data exceeds the configured maxmemory limit. Without an eviction policy, Redis rejects write commands with an OOM error, and applications can no longer cache or store data.

The Error You'll See

Redis command rejected:

```bash
$ redis-cli SET user:1001 "data"
(error) OOM command not allowed when used memory > 'maxmemory'.
```

Redis log:

```bash
$ tail /var/log/redis/redis-server.log
[12345] 16 Apr 00:02:00.123 # WARNING: Redis is running out of memory
[12345] 16 Apr 00:02:00.456 # Out of memory allocated 1073741824 bytes
```

Application error:

```bash
RedisException: OOM command not allowed when used memory > 'maxmemory'
Command: SET
Key: user:1001
```

INFO memory output:

```bash
$ redis-cli INFO memory
# Memory
used_memory:1073741824
used_memory_human:1.00G
used_memory_peak:1073741824
used_memory_peak_human:1.00G
maxmemory:1073741824
maxmemory_human:1.00G
maxmemory_policy:noeviction
```

Why This Happens

  1. No eviction policy - maxmemory_policy set to noeviction
  2. maxmemory too low - memory limit too small for the workload
  3. Large values - storing large objects in Redis
  4. Memory fragmentation - allocator fragmentation overhead
  5. No key expiration - keys never expire and accumulate
  6. Unbounded growth - no limits on data growth
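
The first cause is the most common. A toy model (pure Python, not Redis internals) shows why writes start failing the instant the limit is hit when nothing can be evicted:

```python
class OOMError(Exception):
    """Mirrors Redis's 'OOM command not allowed' response."""

class CappedStore:
    """Toy key-value store with a byte budget and no eviction."""

    def __init__(self, maxmemory: int):
        self.maxmemory = maxmemory
        self.data: dict[str, str] = {}

    def used_memory(self) -> int:
        return sum(len(k) + len(v) for k, v in self.data.items())

    def set(self, key: str, value: str) -> None:
        # With noeviction, nothing is freed to make room: the write is rejected.
        if self.used_memory() + len(key) + len(value) > self.maxmemory:
            raise OOMError("command not allowed when used memory > 'maxmemory'")
        self.data[key] = value

store = CappedStore(maxmemory=32)
store.set("user:1", "a" * 10)      # fits within the budget
try:
    store.set("user:2", "b" * 30)  # would exceed the budget
except OOMError as e:
    print("rejected:", e)
```

Existing keys survive, but every further write fails, which is exactly the failure mode applications see.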

Step 1: Check Redis Memory Status

```bash
# Check memory usage
redis-cli INFO memory

# Key metrics:
# used_memory      - current memory used
# maxmemory        - configured limit (0 = unlimited)
# maxmemory_policy - eviction policy

# Quick memory summary
redis-cli INFO memory | grep -E "used_memory_human|maxmemory_human|maxmemory_policy"

# Check if maxmemory is set
redis-cli CONFIG GET maxmemory

# Check the eviction policy
redis-cli CONFIG GET maxmemory-policy

# Check the memory fragmentation ratio
redis-cli INFO memory | grep mem_fragmentation_ratio
# > 1.5 indicates high fragmentation

# Get detailed memory stats
redis-cli MEMORY STATS
```

Step 2: Configure maxmemory Limit

```bash
# Set the maxmemory limit at runtime
redis-cli CONFIG SET maxmemory 2gb

# Or in redis.conf (/etc/redis/redis.conf)
maxmemory 2gb

# Common units:
# maxmemory 1kb - 1 kilobyte
# maxmemory 1mb - 1 megabyte
# maxmemory 1gb - 1 gigabyte

# Calculate an appropriate size:
# 50-70% of available RAM
# Leave room for the OS, fragmentation, and other processes

# For a system with 8GB RAM:
maxmemory 4gb  # 50% of total

# Restart Redis if editing redis.conf
sudo systemctl restart redis

# Verify the setting
redis-cli CONFIG GET maxmemory
```
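
The sizing rule of thumb above is simple arithmetic; a minimal helper (the function name and 50-70% bounds are this sketch's assumptions, not a Redis API) makes it explicit:

```python
def suggest_maxmemory(total_ram_bytes: int, fraction: float = 0.5) -> int:
    """Suggest a maxmemory value as a fraction of total RAM.

    The 50-70% rule of thumb leaves headroom for the OS,
    allocator fragmentation, and copy-on-write spikes during
    RDB saves and AOF rewrites.
    """
    if not 0.0 < fraction <= 0.7:
        raise ValueError("fraction should be in (0, 0.7] per the rule of thumb")
    return int(total_ram_bytes * fraction)

GB = 1024 ** 3
print(suggest_maxmemory(8 * GB) // GB)  # 4 (GB), matching 'maxmemory 4gb' above
```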

Step 3: Enable Eviction Policy

```bash
# Available policies:
# noeviction      - return an error when the memory limit is reached (default)
# allkeys-lru     - evict any key using the LRU algorithm
# volatile-lru    - evict keys with an expire set, using LRU
# allkeys-lfu     - evict any key using the LFU algorithm (Redis 4.0+)
# volatile-lfu    - evict keys with an expire set, using LFU (Redis 4.0+)
# allkeys-random  - evict random keys
# volatile-random - evict random keys with an expire set
# volatile-ttl    - evict keys with the shortest TTL first

# Recommended for a pure cache: allkeys-lru
redis-cli CONFIG SET maxmemory-policy allkeys-lru

# Recommended for a cache with TTLs: volatile-ttl
redis-cli CONFIG SET maxmemory-policy volatile-ttl

# For a session store with expiring keys: volatile-lru
redis-cli CONFIG SET maxmemory-policy volatile-lru

# In redis.conf (persists across restarts)
maxmemory-policy allkeys-lru

# CONFIG SET applies immediately, without a restart
redis-cli CONFIG SET maxmemory-policy allkeys-lru

# Verify the policy
redis-cli CONFIG GET maxmemory-policy
```

Step 4: Set Key Expiration Times

```bash
# Always set an expiration for cache keys

# Using SET with EX (expire in seconds)
redis-cli SET user:1001 "data" EX 3600  # Expires in 1 hour

# Using SETEX
redis-cli SETEX user:1001 3600 "data"

# Using EXPIRE on an existing key
redis-cli EXPIRE user:1001 3600

# Using EXPIREAT (expire at a Unix timestamp)
redis-cli EXPIREAT user:1001 1716234567

# Check the remaining TTL
redis-cli TTL user:1001
# Returns seconds remaining; -1 = no expiry, -2 = key doesn't exist

# Typical TTL patterns in application code
# (every cache key should carry a TTL)
SET cache:item:123 "value" EX 300
SET session:abc123 "session_data" EX 1800
SET rate_limit:user:456 "10" EX 60
```
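
Why TTLs bound memory can be sketched with a small expiring map (pure Python with an injected clock for determinism; this is an illustration, not redis-py):

```python
import time

class ExpiringCache:
    """Toy cache where every entry carries a TTL, like SET ... EX."""

    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.data: dict[str, tuple[str, float]] = {}

    def set(self, key: str, value: str, ex: float) -> None:
        # Store the absolute deadline, like Redis's internal expire table.
        self.data[key] = (value, self.clock() + ex)

    def get(self, key: str):
        entry = self.data.get(key)
        if entry is None:
            return None
        value, deadline = entry
        if self.clock() >= deadline:
            del self.data[key]  # lazy expiration on access, as Redis does
            return None
        return value

# Deterministic demo with a fake clock
now = [0.0]
cache = ExpiringCache(clock=lambda: now[0])
cache.set("session:abc", "data", ex=1800)
now[0] = 1799.0
print(cache.get("session:abc"))  # data
now[0] = 1801.0
print(cache.get("session:abc"))  # None
```

Redis also expires keys proactively in the background; the lazy path shown here is only half the story.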

Step 5: Analyze Memory Usage by Keys

```bash
# Find the keys using the most memory
redis-cli --bigkeys

# Sample output:
# Biggest string found 'user:1001' has 1048576 bytes
# Biggest list found 'queue:jobs' has 10000 items
# Biggest set found 'tags:popular' has 5000 members
# Biggest hash found 'user:profile:1001' has 100 fields

# Memory used by a specific key
redis-cli MEMORY USAGE user:1001

# Low-level object info
redis-cli DEBUG OBJECT user:1001

# List all keys (caution: blocks the server on large datasets)
redis-cli KEYS '*'

# Count keys
redis-cli DBSIZE

# Scan keys incrementally (non-blocking)
redis-cli SCAN 0 COUNT 1000
```

Step 6: Optimize Data Structures

```bash
# Use appropriate data types to reduce memory

# WRONG: large JSON string for structured data
SET user:1001 '{"name":"John","email":"john@example.com","age":30,"city":"NYC"}'
# Memory: key overhead + full JSON string

# BETTER: use a hash for structured data
HSET user:1001 name "John" email "john@example.com" age 30 city "NYC"
# Memory: more efficient for multiple small fields

# Small hashes use the compact ziplist encoding
# (defaults: 128 entries, 64-byte values; called listpack in Redis 7+)
hash-max-ziplist-entries 128
hash-max-ziplist-value 64

# Compressed encoding for small lists
list-max-ziplist-size -2

# intset encoding for sets of integers (default 512 entries)
set-max-intset-entries 512

# Check the encoding of a key
redis-cli OBJECT ENCODING user:1001
# Returns: ziplist, listpack, hashtable, intset, etc.

# Tune the thresholds in redis.conf
hash-max-ziplist-entries 1000
hash-max-ziplist-value 128
list-max-ziplist-size -2
set-max-intset-entries 1000
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
```
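
How Redis picks the compact encoding can be approximated as a pair of threshold checks. This is a deliberate simplification (the real decision also depends on the Redis version and conversion is one-way), but it captures why both limits matter:

```python
def hash_encoding(fields: dict[str, str],
                  max_entries: int = 128,
                  max_value: int = 64) -> str:
    """Approximate Redis's choice between the compact ziplist/listpack
    encoding and a real hash table, using the
    hash-max-ziplist-entries / hash-max-ziplist-value thresholds."""
    if len(fields) <= max_entries and all(
        len(k) <= max_value and len(v) <= max_value
        for k, v in fields.items()
    ):
        return "ziplist"   # compact sequential encoding, less memory
    return "hashtable"     # real hash table, more memory per field

small = {"name": "John", "city": "NYC"}
big = {f"field{i}": "x" for i in range(1000)}
print(hash_encoding(small))  # ziplist
print(hash_encoding(big))    # hashtable
```

Note that a single oversized value is enough to flip the whole key to the hashtable encoding, which is why trimming large field values pays off.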

Step 7: Reduce Memory Fragmentation

```bash
# Check the fragmentation ratio
redis-cli INFO memory | grep mem_fragmentation_ratio
# Normal: 1.0 - 1.5
# High:   > 1.5 indicates fragmentation

# The memory allocator is chosen at compile time:
# make MALLOC=jemalloc   (default on Linux, recommended)
# make MALLOC=libc

# Ask the allocator to release dirty pages (Redis 4.0+, jemalloc builds)
redis-cli MEMORY PURGE

# Enable active defragmentation (Redis 4.0+)
redis-cli CONFIG SET activedefrag yes

# Configure defragmentation thresholds
redis-cli CONFIG SET active-defrag-ignore-bytes 100mb
redis-cli CONFIG SET active-defrag-threshold-lower 10
redis-cli CONFIG SET active-defrag-threshold-upper 100
redis-cli CONFIG SET active-defrag-cycle-min 1
redis-cli CONFIG SET active-defrag-cycle-max 25

# In redis.conf
activedefrag yes
active-defrag-ignore-bytes 100mb
active-defrag-threshold-lower 10
active-defrag-threshold-upper 100
```

Step 8: Use Key Naming Patterns

```bash
# Use consistent key naming for easier management

# Good pattern: namespace:type:id
cache:user:1001
session:web:abc123
rate:api:user:456

# Use prefixes for batch operations
# Delete all cache keys for users (caution: KEYS blocks the server)
redis-cli KEYS 'cache:user:*' | xargs redis-cli DEL

# Prefer SCAN for large datasets
redis-cli --scan --pattern 'cache:user:*' | xargs -L 1000 redis-cli DEL

# Use UNLINK (Redis 4.0+) for non-blocking deletes
redis-cli UNLINK cache:user:1001

# Batch delete with UNLINK
redis-cli --scan --pattern 'cache:old:*' | xargs -L 1000 redis-cli UNLINK
```

Step 9: Monitor Memory Usage

```bash
# Create a monitoring script
cat << 'EOF' > /usr/local/bin/redis_memory_check.sh
#!/bin/bash
REDIS_CLI="redis-cli"

# INFO lines end in \r\n; strip the \r or the arithmetic below breaks
USED=$($REDIS_CLI INFO memory | grep '^used_memory:' | cut -d: -f2 | tr -d '\r')
MAX=$($REDIS_CLI INFO memory | grep '^maxmemory:' | cut -d: -f2 | tr -d '\r')

if [ "$MAX" -gt 0 ]; then
    PERCENT=$((USED * 100 / MAX))
    if [ "$PERCENT" -gt 90 ]; then
        echo "ALERT: Redis memory at $PERCENT% (${USED}/${MAX})"
        # Send alert
        mail -s "Redis Memory Alert" admin@company.com <<< \
            "Redis memory usage: $PERCENT%"
    fi
fi
EOF

chmod +x /usr/local/bin/redis_memory_check.sh

# Run every 5 minutes from cron
echo "*/5 * * * * root /usr/local/bin/redis_memory_check.sh" > /etc/cron.d/redis-memory

# Real-time monitoring
redis-cli --stat

# Watch memory stats
watch -n 5 'redis-cli INFO memory | grep -E "used_memory_human|maxmemory_human"'
```
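
The same check the cron script performs can be done in any language by parsing the INFO output, which is just `key:value` lines. A Python sketch, run here against a captured sample rather than a live connection:

```python
def parse_info(info_text: str) -> dict[str, str]:
    """Parse 'key:value' lines from redis-cli INFO output.

    Lines end with \r\n and section headers start with '#'.
    """
    fields = {}
    for line in info_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        fields[key] = value
    return fields

def memory_percent(fields: dict[str, str]):
    """Return used memory as a percentage of maxmemory, or None if unlimited."""
    used = int(fields["used_memory"])
    maxmem = int(fields["maxmemory"])
    if maxmem == 0:  # 0 means no limit; there is no percentage to report
        return None
    return used * 100.0 / maxmem

SAMPLE = "# Memory\r\nused_memory:966367641\r\nmaxmemory:1073741824\r\n"
print(round(memory_percent(parse_info(SAMPLE)), 1))  # 90.0
```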

Step 10: Scale Redis Memory

```bash
# When a single instance is insufficient

# Option 1: Increase server memory
# Upgrade to a larger instance

# Option 2: Redis Cluster
# Split data across multiple nodes
# Minimum 6 nodes: 3 masters + 3 replicas
# Each node handles a subset of the key space

redis-cli --cluster create \
    10.0.0.1:6379 10.0.0.2:6379 10.0.0.3:6379 \
    10.0.0.4:6379 10.0.0.5:6379 10.0.0.6:6379 \
    --cluster-replicas 1

# Option 3: Application-level sharding
# Route keys to different instances based on a hash of the key

# Option 4: External storage for large values
# Store references in Redis, the actual data elsewhere
SET user:1001:avatar_ref "s3://bucket/user/1001/avatar.jpg"
```
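
Option 3 can be as simple as hashing the key to pick an instance. A sketch with hypothetical shard addresses (the `SHARDS` list and `shard_for` name are this example's inventions, not a Redis API):

```python
import hashlib

# Hypothetical shard endpoints; replace with real instance addresses.
SHARDS = ["redis-0:6379", "redis-1:6379", "redis-2:6379"]

def shard_for(key: str) -> str:
    """Route a key to a shard with a stable hash.

    md5 (rather than Python's built-in hash(), which is randomized
    per process) keeps routing stable across processes and restarts.
    Caveat: adding or removing a shard remaps most keys; consistent
    hashing or Redis Cluster avoids that.
    """
    digest = hashlib.md5(key.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(SHARDS)
    return SHARDS[index]

print(shard_for("cache:user:1001"))  # always the same shard for this key
```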

Redis Eviction Policy Reference

| Policy | Use Case | Behavior |
| --- | --- | --- |
| noeviction | Persistent data | Reject writes, keep all keys |
| allkeys-lru | General cache | Evict least recently used |
| volatile-lru | Cache with TTL | Evict expiring keys, LRU |
| volatile-ttl | Time-based cache | Evict shortest TTL first |
| allkeys-lfu | Access-frequency cache | Evict least frequently used |
| allkeys-random | Random eviction | Evict random keys |
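
The allkeys-lru row can be illustrated with a small LRU store built on OrderedDict. This is a conceptual sketch: real Redis approximates LRU by sampling a few keys per eviction instead of tracking exact order, but the observable effect is similar:

```python
from collections import OrderedDict

class LRUStore:
    """Capped store that evicts the least recently used key,
    like maxmemory-policy allkeys-lru."""

    def __init__(self, max_keys: int):
        self.max_keys = max_keys
        self.data: OrderedDict[str, str] = OrderedDict()

    def get(self, key: str):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as recently used
        return self.data[key]

    def set(self, key: str, value: str) -> None:
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.max_keys:
            evicted, _ = self.data.popitem(last=False)  # oldest entry out
            print("evicted:", evicted)

store = LRUStore(max_keys=2)
store.set("a", "1")
store.set("b", "2")
store.get("a")       # touch 'a' so 'b' becomes least recently used
store.set("c", "3")  # prints: evicted: b
```

Contrast with the noeviction row: instead of quietly dropping "b", that policy would reject the write of "c" outright.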

Verify the Fix

```bash
# After configuring maxmemory and eviction

# 1. Check the memory policy is set
redis-cli CONFIG GET maxmemory-policy
# Should not be "noeviction" for cache workloads

# 2. Verify the maxmemory limit
redis-cli CONFIG GET maxmemory

# 3. Test that writes succeed
redis-cli SET test:key "test_value"
# Should return OK, not an OOM error

# 4. Check memory stats
redis-cli INFO memory | grep -E "used_memory|maxmemory"

# 5. Monitor eviction stats
redis-cli INFO stats | grep evicted
# Shows the number of keys evicted

# 6. Check that key TTLs are set
redis-cli TTL some:key
# Should show a positive number, not -1

# 7. Verify fragmentation is reasonable
redis-cli INFO memory | grep mem_fragmentation_ratio
# Should be < 1.5

# 8. Run memory analysis
redis-cli --bigkeys
# Identify any large keys to optimize
```

  • [Fix Redis Connection Refused](/articles/fix-redis-connection-refused)
  • [Fix Redis Key Not Found](/articles/fix-redis-key-not-found)
  • [Fix Redis Persistence Failed](/articles/fix-redis-persistence-failed)