## Introduction
When Redis reaches its configured maxmemory limit and has an eviction policy that prevents key removal (like noeviction), it returns OOM errors for write commands. This protects existing data but blocks new writes, causing application failures.
## Symptoms

- Redis client errors: `OOM command not allowed when used memory > 'maxmemory'.`
- `redis-cli INFO memory` shows `used_memory` near the `maxmemory` setting
- Applications fail to write new cache entries
- `used_memory_human` approaches the configured limit
- Error responses for `SET`, `HSET`, `LPUSH`, and other write commands
- Monitoring alerts on memory usage percentage
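To confirm the symptom quantitatively, the usage ratio can be computed from `INFO memory` output. A minimal sketch (the `mem_usage_pct` function name is an illustrative choice; it reads INFO text on stdin so it can be exercised without a live server):

```shell
# Compute used_memory as a percentage of maxmemory from `redis-cli INFO memory`
# output. Reads stdin; with a live server: redis-cli INFO memory | mem_usage_pct
mem_usage_pct() {
  awk -F: '
    { gsub(/\r/, "") }                 # INFO lines end in CRLF
    /^used_memory:/ { used = $2 }
    /^maxmemory:/   { max  = $2 }
    END {
      if (max + 0 == 0) { print "maxmemory not set"; exit 1 }
      printf "%.0f\n", 100 * used / max
    }'
}
```

Anything at or above roughly 90% of `maxmemory` means eviction (or OOM errors, under `noeviction`) is imminent.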
Common Causes
maxmemoryset too low for workload- Eviction policy set to
noeviction(default in some configs) - Memory leak from unexpired keys accumulating
- Large data structures (hashes, lists) consuming excessive memory
- No TTL set on volatile cache keys
- Redis used as database instead of cache without eviction
## Step-by-Step Fix

1. Check current memory usage and settings:

```bash
redis-cli INFO memory
# Look for:
# used_memory: 1073741824
# maxmemory: 1073741824
# maxmemory_policy: noeviction
```

2. Check memory usage by keys:

```bash
# Sample random keys to find memory hogs
redis-cli --bigkeys

# Or use rdb-tools for detailed offline analysis
rdb -c memory /var/lib/redis/dump.rdb --largest 100 > memory.csv
```
3. Immediately increase `maxmemory` (temporary fix):

```bash
redis-cli CONFIG SET maxmemory 2gb
```

4. Change the eviction policy to allow key removal:

```bash
# allkeys-lru:  evict the least recently used keys
# volatile-lru: evict LRU keys that have a TTL
# allkeys-lfu:  evict the least frequently used keys
# volatile-lfu: evict LFU keys that have a TTL
redis-cli CONFIG SET maxmemory-policy allkeys-lru
```

5. Set the eviction policy persistently in `redis.conf`:

```
maxmemory 2gb
maxmemory-policy allkeys-lru
maxmemory-samples 10
```

6. Force immediate eviction by triggering a write:

```bash
redis-cli SET temp:trigger evict
# This write triggers eviction of old keys
```

7. Analyze and optimize large keys:

```bash
# Check the size of specific keys
redis-cli MEMORY USAGE my:large:hash

# If using hashes with many fields, shard them:
# instead of user:1000 (all fields),
# use user:1000:profile, user:1000:settings, etc.
```
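One way to apply the sharding idea from step 7 systematically is a hashed-bucket naming scheme, so no single hash grows unbounded. A sketch, assuming a hypothetical `bucket_key` helper and an arbitrary bucket count of 16 (small hashes also keep Redis's compact in-memory encoding):

```shell
# Map a hash field to one of N smaller bucketed hashes instead of one giant
# hash. bucket_key is a hypothetical helper, not a Redis command.
bucket_key() {
  local base=$1 field=$2 buckets=${3:-16}
  # cksum gives a portable numeric hash of the field name
  local h
  h=$(printf '%s' "$field" | cksum | cut -d' ' -f1)
  printf '%s:%d\n' "$base" $((h % buckets))
}

# Usage: write into the bucket instead of the monolithic hash, e.g.
#   redis-cli HSET "$(bucket_key user:1000 email)" email "a@example.com"
```

The same field always maps to the same bucket, so reads stay a single `HGET` against the computed key.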
8. Set a TTL on cache entries that should expire:

```bash
# Add a TTL to existing keys without expiration
redis-cli --scan --pattern "cache:*" | xargs -I {} redis-cli EXPIRE {} 3600
```

9. Raise the active expiration effort for faster cleanup of expired keys (Redis 6+):

```bash
redis-cli CONFIG SET active-expire-effort 10
```

10. Monitor memory after the fix:

```bash
# Watch memory in real time; evicted_keys lives in the Stats section of INFO,
# so query the full INFO output rather than INFO memory
watch -n 1 'redis-cli INFO | grep -E "used_memory_human|maxmemory_human|evicted_keys"'
```
## Prevention

- Always set an appropriate `maxmemory` based on workload analysis
- Use `allkeys-lru` or `volatile-lru` for cache workloads
- Set a TTL on all volatile cache keys
- Monitor memory usage with the `used_memory`/`maxmemory` ratio
- Use `MEMORY USAGE` to audit large keys regularly
- Implement cache warming strategies instead of storing unlimited data
- Consider Redis Cluster for horizontal scaling when a single node's memory is insufficient
- Enable `maxmemory-policy` monitoring alerts
- Run regular `redis-cli --bigkeys` analysis in CI/CD
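The ratio monitoring above can be wired into a cron-friendly check. A sketch, assuming an 85% alert threshold (the `alert_if_high` name and threshold are illustrative choices):

```shell
# Print current usage and exit nonzero when used_memory exceeds a threshold
# percent of maxmemory. Feed it live data with:
#   redis-cli INFO memory | alert_if_high 85
alert_if_high() {
  awk -F: -v t="${1:-85}" '
    { gsub(/\r/, "") }                  # INFO lines end in CRLF
    /^used_memory:/ { used = $2 }
    /^maxmemory:/   { max  = $2 }
    END {
      if (max + 0 == 0) exit 0          # no limit configured, nothing to check
      pct = 100 * used / max
      printf "redis memory at %.0f%% of maxmemory\n", pct
      exit (pct > t) ? 1 : 0
    }'
}
```

The nonzero exit status is what a cron job or alerting agent keys off.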
## Advanced Memory Tuning

- Use 32-bit Redis for small datasets (<4 GB) to save pointer memory
- Enable `activedefrag` for memory defragmentation
- Consider Redis Streams instead of large Lists
- Use the RedisJSON module for efficient JSON storage
- Implement client-side caching (Redis 6+) to reduce server memory
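The `activedefrag` item above is usually set in `redis.conf`. A sketch with illustrative thresholds — the values are assumptions to tune per workload, and the feature only works in builds using the bundled jemalloc allocator:

```
# Active defragmentation (jemalloc builds only)
activedefrag yes
# Minimum amount of fragmentation waste before defrag starts
active-defrag-ignore-bytes 100mb
# Fragmentation percentage at which defrag begins
active-defrag-threshold-lower 10
# Fragmentation percentage at which maximum effort is used
active-defrag-threshold-upper 100
```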