## Introduction

Redis maintains an output buffer for each client to queue responses. When a slow client cannot consume data fast enough (common with large `KEYS`, `SMEMBERS`, or pub/sub subscriptions), the output buffer grows until it exceeds the configured `client-output-buffer-limit`, at which point Redis forcibly disconnects the client.
## Symptoms

- Clients are disconnected with `Client closed connection` or `ERR max number of clients reached` errors
- Redis logs show `client scheduled to be closed ASAP for overcoming of output buffer limits`
- Pub/sub subscribers drop messages silently and never recover
- `INFO clients` shows `client_recent_max_output_buffer` growing continuously
- Monitoring shows periodic connection churn correlating with large-response queries
## Common Causes

- Application calling `SMEMBERS` on a set with millions of elements
- Pub/sub subscriber not processing messages fast enough from a high-throughput channel
- `MONITOR` command running in production, generating massive output
- Replica falling behind the primary, causing the replication output buffer to overflow
- `client-output-buffer-limit` thresholds set too low for the workload
## Step-by-Step Fix

1. **Check current buffer limit configuration**:

   ```bash
   redis-cli CONFIG GET client-output-buffer-limit
   # Returns:
   # 1) "client-output-buffer-limit"
   # 2) "normal 0 0 0 slave 268435456 67108864 60 pubsub 33554432 8388608 60"
   ```
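   The flat string returned above packs four tokens per client class (`<class> <hard> <soft> <soft_seconds>`). As a hypothetical helper (not part of redis-py), you can split it into per-class limits for use in monitoring scripts:

   ```python
   # Hypothetical helper: parse the value returned by
   # CONFIG GET client-output-buffer-limit into per-class limits.
   # Field order matches the Redis config format:
   # <class> <hard_limit> <soft_limit> <soft_seconds>, repeated per class.
   def parse_output_buffer_limits(value):
       fields = value.split()
       limits = {}
       for i in range(0, len(fields), 4):
           cls, hard, soft, secs = fields[i:i + 4]
           limits[cls] = {'hard': int(hard), 'soft': int(soft),
                          'soft_seconds': int(secs)}
       return limits

   limits = parse_output_buffer_limits(
       "normal 0 0 0 slave 268435456 67108864 60 pubsub 33554432 8388608 60"
   )
   # Note: a hard limit of 0 means "unlimited" for that class
   ```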
2. **Identify clients with large output buffers**:

   ```bash
   # obl = output buffer length in bytes; print it first, then sort descending
   redis-cli CLIENT LIST | awk '{for(i=1;i<=NF;i++) if($i ~ /^obl=/) {split($i, a, "="); print a[2], $0}}' | sort -rn | head -10
   ```
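   If you would rather inspect `CLIENT LIST` output programmatically, here is a sketch: the `id`, `addr`, and `obl` field names are real `CLIENT LIST` fields, but the sample lines below are made up for illustration.

   ```python
   def top_output_buffers(client_list_text, n=10):
       # Each CLIENT LIST line is space-separated key=value pairs;
       # obl is the output buffer length in bytes.
       rows = []
       for line in client_list_text.splitlines():
           fields = dict(f.split('=', 1) for f in line.split() if '=' in f)
           rows.append((int(fields.get('obl', 0)),
                        fields.get('id'), fields.get('addr')))
       return sorted(rows, reverse=True)[:n]

   # Hypothetical sample output from CLIENT LIST
   sample = (
       "id=3 addr=10.0.0.5:57884 obl=0 oll=0\n"
       "id=7 addr=10.0.0.9:42100 obl=16777216 oll=12"
   )
   worst = top_output_buffers(sample, n=1)
   ```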
3. **Increase output buffer limits for specific client types**:

   ```bash
   # Format per class: <class> <hard_limit> <soft_limit> <soft_seconds>
   # Increase normal client buffer to 256MB hard, 64MB soft for 60s
   redis-cli CONFIG SET client-output-buffer-limit "normal 268435456 67108864 60 slave 268435456 67108864 60 pubsub 67108864 16777216 60"
   # CONFIG SET is not persisted; run CONFIG REWRITE or update redis.conf
   # so the change survives a restart
   ```
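   Raw byte counts like `268435456` are easy to mistype. A hypothetical helper that builds the config value from megabyte figures:

   ```python
   def build_output_buffer_limit(classes):
       # classes: {class_name: (hard_mb, soft_mb, soft_seconds)}
       # Returns the single string CONFIG SET client-output-buffer-limit expects.
       parts = []
       for cls, (hard_mb, soft_mb, secs) in classes.items():
           parts.append(f"{cls} {hard_mb * 1024 * 1024} {soft_mb * 1024 * 1024} {secs}")
       return ' '.join(parts)

   value = build_output_buffer_limit({'normal': (256, 64, 60),
                                      'pubsub': (64, 16, 60)})
   ```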
4. **Replace `SMEMBERS` with `SSCAN` for large sets**:

   ```python
   # BAD: loads the entire set into memory (and the client output buffer) at once
   # members = redis.smembers('large_set')

   # GOOD: iterates in batches
   def scan_set_members(redis_client, key, batch_size=100):
       cursor = 0
       while True:
           cursor, members = redis_client.sscan(key, cursor=cursor,
                                                count=batch_size)
           for member in members:
               yield member
           if cursor == 0:
               break
   ```
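   To see the paging behaviour without a live server, here is a self-contained sketch. `FakeRedis` is an in-memory stand-in that mimics redis-py's `sscan(key, cursor, count)` return shape (real SSCAN cursors are not list offsets), and the generator restates the one above so the example runs standalone.

   ```python
   class FakeRedis:
       # Illustrative stand-in, not a real client: only the
       # (cursor, members) return shape matches redis-py's sscan.
       def __init__(self, members):
           self._members = list(members)

       def sscan(self, key, cursor=0, count=10):
           page = self._members[cursor:cursor + count]
           nxt = cursor + count
           return (0 if nxt >= len(self._members) else nxt), page

   def scan_set_members(redis_client, key, batch_size=100):
       cursor = 0
       while True:
           cursor, members = redis_client.sscan(key, cursor=cursor,
                                                count=batch_size)
           yield from members
           if cursor == 0:
               break

   fake = FakeRedis(f"user:{i}" for i in range(250))
   members = list(scan_set_members(fake, "large_set", batch_size=100))
   ```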
5. **Configure pub/sub with buffer-aware subscription handling**:

   ```python
   import redis

   def handle_pubsub_safe(redis_client, channel, check_every=100):
       pubsub = redis_client.pubsub()
       pubsub.subscribe(channel)
       seen = 0
       for message in pubsub.listen():
           if message['type'] == 'message':
               process(message['data'])  # process() is application-defined
               seen += 1
           # Check output buffer growth periodically rather than on every message
           if seen and seen % check_every == 0:
               info = redis_client.info('clients')
               if info.get('client_recent_max_output_buffer', 0) > 32 * 1024 * 1024:
                   print("Warning: a client output buffer has exceeded 32MB")
   ```
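   Another way to keep the subscriber draining quickly is to decouple receipt from processing with a bounded local queue, so backlog accumulates (and is counted) in the application instead of in the server-side output buffer. This is a sketch of the pattern, not a redis-py API; `BoundedInbox` is hypothetical.

   ```python
   from collections import deque

   class BoundedInbox:
       # Hypothetical buffer between the pub/sub receive loop and
       # slower downstream processing.
       def __init__(self, maxlen=1000):
           self.buf = deque(maxlen=maxlen)
           self.dropped = 0

       def push(self, msg):
           # deque(maxlen=...) silently evicts the oldest entry when full;
           # count that so monitoring can see message loss.
           if len(self.buf) == self.buf.maxlen:
               self.dropped += 1
           self.buf.append(msg)

       def drain(self, n):
           for _ in range(min(n, len(self.buf))):
               yield self.buf.popleft()

   inbox = BoundedInbox(maxlen=1000)
   for i in range(1500):
       inbox.push(i)
   ```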
6. **Kill clients that are consuming too much buffer**:

   ```bash
   # List IDs of clients whose output buffer length (obl) exceeds ~10MB
   redis-cli CLIENT LIST | awk '{id=0; obl=0; for(i=1;i<=NF;i++){split($i, a, "="); if(a[1]=="id") id=a[2]; if(a[1]=="obl") obl=a[2]} if (obl+0 > 10485760) print id}'
   # Kill the problematic client by ID
   redis-cli CLIENT KILL ID <client_id>
   ```
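As a last guard, an alerting check against `INFO clients` can flag growth before the hard limit disconnects anyone. The `client_recent_max_output_buffer` field name is the real one reported by Redis; the 64MB threshold is an arbitrary assumption to tune against your own limits.

```python
def buffer_nearing_limit(info_clients, threshold_bytes=64 * 1024 * 1024):
    # client_recent_max_output_buffer comes from INFO clients;
    # the 64MB default threshold here is an assumption, not a Redis default.
    return int(info_clients.get('client_recent_max_output_buffer', 0)) > threshold_bytes

# With redis-py this would be called as:
#   buffer_nearing_limit(redis_client.info('clients'))
```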