## Introduction

When Redis reaches its `maxmemory` limit with the `noeviction` policy (the default), it returns `OOM command not allowed when used memory > maxmemory` for all write commands. This is a hard failure: no new data can be stored until memory is freed or the limit is raised.

## Symptoms

- All write commands return `(error) OOM command not allowed when used memory > maxmemory`
- `INFO memory` shows `used_memory` exceeding `maxmemory`
- Application logs fill with Redis write failures
- Read commands continue to work normally
- `INFO stats` shows `rejected_connections` increasing
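A quick way to quantify the condition is to compare the two `INFO memory` fields directly. The sketch below is a hypothetical offline check: the `info` variable holds canned sample output standing in for `redis-cli INFO memory`, and the percentage is ordinary shell arithmetic.

```bash
# Hypothetical sketch: report used_memory as a percentage of maxmemory.
# `info` is canned sample output; in practice, capture it with:
#   info=$(redis-cli INFO memory | tr -d '\r')
info='used_memory:1073741824
maxmemory:1073741824'

used=$(printf '%s\n' "$info" | awk -F: '/^used_memory:/ {print $2}')
max=$(printf '%s\n' "$info" | awk -F: '/^maxmemory:/ {print $2}')

# Integer percentage of maxmemory consumed (100 here: at the limit)
pct=$((used * 100 / max))
echo "memory usage: ${pct}% of maxmemory"
```

At 100% with `noeviction`, every write is rejected, which is exactly the condition the symptoms above describe.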

## Common Causes

- The default `noeviction` policy does not automatically free memory
- `maxmemory` set too low for a growing dataset
- Memory leak in the application: keys created but never deleted
- Large values (e.g., serialized objects, binary data) consuming disproportionate memory
- No TTL set on keys, preventing natural expiration

## Step-by-Step Fix

1. **Confirm the OOM condition and current settings**:

   ```bash
   redis-cli INFO memory | grep -E "used_memory:|maxmemory:|maxmemory_policy"
   redis-cli CONFIG GET maxmemory
   redis-cli CONFIG GET maxmemory-policy
   ```
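When `redis-cli` output is piped rather than shown interactively, `CONFIG GET` prints the parameter name on one line and its value on the next, which is easy to script against. The sketch below uses a canned reply as a stand-in for the real command:

```bash
# Hypothetical sketch: parse a piped CONFIG GET reply (name line, value line).
# `reply` is a canned stand-in for: redis-cli CONFIG GET maxmemory-policy
reply='maxmemory-policy
noeviction'

# The value is the second line of the reply
policy=$(printf '%s\n' "$reply" | sed -n 2p)
if [ "$policy" = "noeviction" ]; then
  echo "writes will fail at maxmemory: policy is $policy"
fi
```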

2. **Emergency: temporarily increase `maxmemory` if hardware allows**:

   ```bash
   redis-cli CONFIG SET maxmemory 8gb
   # Verify the change takes effect
   redis-cli INFO memory | grep maxmemory
   ```

3. **Change the eviction policy to allow automatic memory freeing**:

   ```bash
   # For cache workloads, use volatile-lru (evicts keys with TTL using LRU)
   redis-cli CONFIG SET maxmemory-policy volatile-lru

   # For a pure cache where all keys can be evicted
   redis-cli CONFIG SET maxmemory-policy allkeys-lru
   ```
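Note that `CONFIG SET` only changes the running instance; the settings revert on restart. To persist them, either run `redis-cli CONFIG REWRITE` (if the server was started from a config file) or set the equivalent lines in `redis.conf`. The values below are illustrative and mirror the commands above:

```conf
# redis.conf: persist the runtime changes across restarts
maxmemory 8gb
maxmemory-policy allkeys-lru
```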

4. **Identify and delete the largest keys**:

   ```bash
   # Use --bigkeys to find the largest key per type, or --memkeys to
   # sample per-key memory with MEMORY USAGE (run them separately)
   redis-cli --bigkeys
   redis-cli --memkeys

   # For more detailed analysis, use redis-memory-analyzer
   # pip install rma
   # rma -h localhost -p 6379

   # DEL does not accept glob patterns; scan for matches and delete them
   redis-cli --scan --pattern "large:unneeded:cache:*" | xargs -n 100 redis-cli DEL
   ```
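For large batches, `UNLINK` is often preferable to `DEL` because it reclaims memory in a background thread instead of blocking the event loop. The sketch below is a hypothetical dry run: a canned key list stands in for the `--scan` output, and `echo` prints the command instead of executing it.

```bash
# Hypothetical sketch: batch-delete matching keys without blocking Redis.
# The canned list stands in for:
#   redis-cli --scan --pattern "large:unneeded:cache:*"
scan_output='large:unneeded:cache:1
large:unneeded:cache:2
large:unneeded:cache:3'

# xargs -n 100 caps keys per command; `echo` keeps this a dry run
# (drop it to actually issue the deletions)
cmds=$(printf '%s\n' "$scan_output" | xargs -n 100 echo redis-cli UNLINK)
printf '%s\n' "$cmds"
```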

5. **Set TTL on keys that are missing expiration**:

   ```bash
   # Find keys without TTL
   redis-cli --scan | while read key; do
     ttl=$(redis-cli TTL "$key")
     if [ "$ttl" -eq -1 ]; then
       echo "Key without TTL: $key"
     fi
   done

   # Set a default TTL on keys matching a pattern
   redis-cli --scan --pattern "session:*" | xargs -I {} redis-cli EXPIRE {} 86400
   ```
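The loop above relies on `TTL`'s return codes: -2 means the key does not exist, -1 means it exists but has no expiration, and any other value is the remaining seconds. A small sketch of that classification, runnable offline:

```bash
# Hypothetical helper: interpret TTL return codes
classify_ttl() {
  case "$1" in
    -2) echo "missing" ;;
    -1) echo "no-ttl" ;;
     *) echo "expires in ${1}s" ;;
  esac
}

classify_ttl -1      # -> no-ttl
classify_ttl 86400   # -> expires in 86400s
```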

6. **Use MEMORY DOCTOR for diagnostic output**:

   ```bash
   redis-cli MEMORY DOCTOR
   # Example output:
   # Sam is too young to suggest things for you...
   # or:
   # * High total memory usage detected. Your instance may benefit from a maxmemory policy.
   ```

## Prevention

- Never use `noeviction` in production unless the dataset size is strictly bounded
- Set `maxmemory` to 70-75% of available RAM to leave room for forks and fragmentation
- Always set TTL on cached keys; never rely on eviction alone
- Use `volatile-ttl` or `volatile-lru` for caches, `noeviction` only for data stores
- Monitor memory usage with alerting at 60%, 80%, and 90% of `maxmemory`
- Run `redis-cli --memkeys` weekly to identify memory hotspots
- Implement application-level cache size limits and key expiration policies
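The 70-75% sizing rule above is easy to turn into a concrete config value. The sketch below is hypothetical: `total_kb` is canned at 16 GiB, and the derived line is what would go into `redis.conf`.

```bash
# Hypothetical sketch: derive maxmemory as 70% of total RAM, leaving
# headroom for fork copy-on-write during persistence and for allocator
# fragmentation. total_kb is canned; on Linux it could be read with:
#   awk '/MemTotal/ {print $2}' /proc/meminfo
total_kb=16777216                      # 16 GiB in KiB
maxmemory_kb=$((total_kb * 70 / 100))
echo "maxmemory ${maxmemory_kb}kb"     # -> maxmemory 11744051kb
```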