## Introduction

Memcached limits the number of simultaneous connections with the `-c` (maxconns) parameter, which defaults to 1024. When all connections are in use, new connection attempts are rejected immediately, causing cache misses and increased database load. This is common in microservices architectures where many application instances connect to a shared Memcached cluster.

## Symptoms

- Memcached logs show `ERROR: Too many open connections`, or connections are silently dropped
- The application reports `connection refused` or `failed to connect` errors for Memcached
- `STAT curr_connections` equals the maxconns limit
- `STAT rejected_connections` keeps increasing in stats output
- Cache hit rate drops as connections fail and requests fall through to the database
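These symptoms can be checked programmatically by parsing the `STAT <name> <value>` lines that Memcached's `stats` command returns. A minimal sketch (the sample output below is illustrative, and the helper names are our own):

```python
def parse_stats(raw: str) -> dict:
    """Parse 'STAT <name> <value>' lines from Memcached's stats output."""
    stats = {}
    for line in raw.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "STAT":
            stats[parts[1]] = parts[2]
    return stats

def connections_exhausted(stats: dict, maxconns: int) -> bool:
    """True if the server is at its connection limit or rejecting clients."""
    return (int(stats.get("curr_connections", 0)) >= maxconns
            or int(stats.get("rejected_connections", 0)) > 0)

# Illustrative sample of what a saturated server reports
sample = "STAT curr_connections 1024\nSTAT rejected_connections 5432\n"
print(connections_exhausted(parse_stats(sample), 1024))  # True
```

Wiring this into a monitoring job lets you alert on saturation before clients start failing.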

## Common Causes

- `-c` (maxconns) set too low for the number of application instances
- No connection pooling, so each request opens a new connection
- Connection leaks: application code never closes its Memcached connections
- Many small microservices all connecting to the same Memcached instance
- The default maxconns of 1024 is insufficient at modern application scale
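To see why missing pooling is the usual culprit: a pool caps the number of live connections per process and reuses them across requests, instead of opening a fresh connection each time. A toy fixed-size pool, where the factory is a stand-in for a real client constructor:

```python
import queue

class ConnectionPool:
    """Toy fixed-size pool: at most max_size connections ever exist."""
    def __init__(self, factory, max_size=10):
        self._factory = factory
        self._pool = queue.LifoQueue(maxsize=max_size)
        self._created = 0
        self._max_size = max_size

    def acquire(self):
        try:
            return self._pool.get_nowait()      # Reuse an idle connection
        except queue.Empty:
            if self._created < self._max_size:
                self._created += 1
                return self._factory()          # Open a new one, within the cap
            return self._pool.get(timeout=1)    # Otherwise wait for a release

    def release(self, conn):
        self._pool.put_nowait(conn)

# Stand-in for opening a real Memcached connection
pool = ConnectionPool(factory=lambda: object(), max_size=2)
a = pool.acquire()
b = pool.acquire()
pool.release(a)
c = pool.acquire()
print(c is a)  # True: 'a' was reused instead of opening a third connection
```

Without the cap, a traffic burst in one service translates directly into a burst of new server-side connections; with it, excess requests wait briefly for a free connection instead.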

## Step-by-Step Fix

1. **Check current connection usage:**

   ```bash
   echo "stats" | nc localhost 11211 | grep -E "curr_connections|max_connections|rejected_connections|total_connections"
   # Example output:
   # STAT curr_connections 1024
   # STAT rejected_connections 5432
   # STAT total_connections 98765
   ```

2. **Increase the maxconns limit:**

   ```bash
   # Raise the file descriptor limit first (Memcached needs one fd per connection)
   ulimit -n 65536

   # Restart Memcached with a higher connection limit
   # -c 10000: allow up to 10000 simultaneous connections
   memcached -m 4096 -c 10000 -p 11211 -d
   ```
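When choosing a value for `-c`, you can estimate peak demand from instance count and per-instance pool size, with a safety factor on top. A rough sizing helper (the deployment numbers here are hypothetical):

```python
def recommended_maxconns(instances: int, pool_size: int, headroom: float = 2.0) -> int:
    """Estimate maxconns: peak pooled connections times a safety factor."""
    return int(instances * pool_size * headroom)

# Hypothetical deployment: 50 app instances, 10 pooled connections each
print(recommended_maxconns(50, 10))  # 1000 -- the default 1024 is already marginal
```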

3. **Configure the service file for systemd:**

   ```ini
   # /etc/systemd/system/memcached.service.d/override.conf
   [Service]
   LimitNOFILE=65536
   ExecStart=
   ExecStart=/usr/bin/memcached -m 4096 -c 10000 -p 11211 -u memcached -l 0.0.0.0
   ```

   After editing the override, run `systemctl daemon-reload` and then `systemctl restart memcached` for the new limits to take effect.
4. **Implement connection pooling in the application:**

   ```python
   from pymemcache.client.hash import HashClient

   # Use a pooled client so each app instance caps its connection count
   client = HashClient(
       ['memcached1:11211', 'memcached2:11211'],
       use_pooling=True,
       pool_size=10,  # Max connections per server
       timeout=1,
   )

   result = client.get('my_key')

   # Close the client on shutdown so pooled connections are released
   client.close()
   ```

5. **Set connection timeouts to release idle connections:**

   ```python
   from pymemcache.client.base import Client

   client = Client(
       ('localhost', 11211),
       connect_timeout=1,  # Seconds to wait when establishing the connection
       timeout=1,          # Seconds to wait for each get/set call
       no_delay=True,      # Disable Nagle's algorithm for lower latency
   )
   ```

   On the server side, Memcached can also reap idle client connections with `-o idle_timeout=<seconds>`.

## Prevention

- Set maxconns to at least 2x the expected peak concurrent connections
- Use connection pooling in all application Memcached clients
- Monitor `rejected_connections` and alert on any increase
- Set appropriate connection timeouts (1-2 seconds) so failures surface quickly instead of piling up
- Distribute connections across multiple Memcached instances using consistent hashing
- Monitor the ratio of `curr_connections` to the maxconns limit
- Implement circuit breakers so clients stop connecting when Memcached is overwhelmed
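The circuit-breaker idea in the last bullet can be sketched as: after N consecutive failures, stop attempting connections for a cooldown period and treat every lookup as a cache miss, rather than queueing more connect attempts against a saturated server. A minimal sketch (the thresholds and helper names are illustrative, not from any particular library):

```python
import time

class CircuitBreaker:
    """Open after `threshold` consecutive failures; retry after `cooldown` seconds."""
    def __init__(self, threshold=5, cooldown=30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.cooldown:
            self.opened_at = None   # Half-open: let one attempt through
            self.failures = 0
            return True
        return False

    def record_success(self):
        self.failures = 0
        self.opened_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = time.monotonic()

def cached_get(breaker, client_get, key):
    """Treat an open breaker as a cache miss instead of piling up connects."""
    if not breaker.allow():
        return None  # Fall through to the database
    try:
        value = client_get(key)
        breaker.record_success()
        return value
    except OSError:
        breaker.record_failure()
        return None
```

Here `client_get` stands in for your Memcached client's get call; returning `None` lets the caller fall back to the database without waiting on a dead cache.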