## Introduction

Memcached has a maximum item size limit of 1MB by default (configurable with `-I`). When an item exceeds the largest slab class size, it cannot be stored and the SET operation fails. This commonly occurs when caching serialized objects, rendered pages, or query result sets that grow over time.

## Symptoms

- `SERVER_ERROR object too large for slab` in Memcached logs
- SET operations returning `SERVER_ERROR` or silently failing
- Application cache stores returning None for keys that should exist
- `STAT bytes_written` not increasing despite SET operations
- Error messages: `FAILED TO STORE: item too large`

## Common Causes

- Application storing serialized objects that have grown beyond 1MB
- No size check before attempting to cache large query results
- Default `-I` (max item size) of 1MB insufficient for the workload
- Rendering entire HTML pages for caching without size limits
- Compressed data still exceeding the slab limit
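A payload's serialized size is what Memcached actually sees, and it is often much larger than intuition suggests, which is why caches that worked at launch start failing as data grows. A quick way to measure it (pickle is shown as the serializer; your application may use JSON or another format):

```python
import pickle
import sys

# A query result set that has grown over time
rows = [{"id": i, "name": f"user{i}"} for i in range(10_000)]

payload = pickle.dumps(rows)
print(len(payload))         # actual bytes sent to Memcached
print(sys.getsizeof(rows))  # in-memory list overhead only -- not comparable
```

Note that `len()` on the serialized bytes is the number that matters for the slab limit; `sys.getsizeof()` measures Python object overhead and bears no relation to the stored payload.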

## Step-by-Step Fix

1. **Check the current maximum item size:**

```bash
echo "stats settings" | nc localhost 11211 | grep max_item_size
# Default: STAT max_item_size 1048576 (1MB)
```
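The same check can be done from application code over the text protocol, which is handy as a startup sanity check. A stdlib-only sketch (the host, port, and helper names are assumptions, not part of any client library):

```python
import socket

def parse_max_item_size(stats_text):
    """Extract max_item_size from `stats settings` output
    (lines of the form `STAT <name> <value>`)."""
    for line in stats_text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[1] == "max_item_size":
            return int(parts[2])
    return None

def get_max_item_size(host="localhost", port=11211, timeout=2.0):
    """Query a running Memcached for its configured max item size."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(b"stats settings\r\n")
        buf = b""
        while not buf.endswith(b"END\r\n"):
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
    return parse_max_item_size(buf.decode("ascii", errors="replace"))

# The parser also works on captured output:
print(parse_max_item_size("STAT max_item_size 1048576\r\nEND\r\n"))  # 1048576
```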

2. **Increase the maximum item size:**

```bash
# Restart Memcached with a larger item size limit
memcached -m 4096 -I 5m -p 11211 -d

# -I 5m: maximum item size of 5MB
# Note: this creates additional slab classes up to the new limit
```

3. **Implement size checking in the application:**

```python
import pickle

MAX_CACHE_SIZE = 5 * 1024 * 1024  # 5MB, matching the -I setting

def serialize(value):
    # Serialization is application-specific; pickle shown as an example
    return pickle.dumps(value)

def safe_set(client, key, value, expire=300):
    serialized = serialize(value)
    # len() gives the actual payload size; sys.getsizeof() would report
    # Python object overhead, not the bytes sent to Memcached
    size = len(serialized)

    if size > MAX_CACHE_SIZE:
        print(f"Item {key} too large: {size} bytes, skipping cache")
        return False

    return client.set(key, serialized, expire=expire)
```
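When an item is only modestly over the limit, compressing before the size check often avoids skipping the cache entirely. A sketch using zlib; the 1-byte flag prefix here is an ad-hoc convention for this sketch, not a Memcached feature:

```python
import zlib

MAX_CACHE_SIZE = 5 * 1024 * 1024  # 5MB, matching the -I setting

def pack(serialized, threshold=64 * 1024):
    """Compress payloads above `threshold`, tagging each value with a
    1-byte flag so the reader knows whether to decompress."""
    if len(serialized) > threshold:
        return b"\x01" + zlib.compress(serialized)
    return b"\x00" + serialized

def unpack(stored):
    flag, body = stored[:1], stored[1:]
    return zlib.decompress(body) if flag == b"\x01" else body

# Highly repetitive payloads (HTML, JSON) compress dramatically
payload = b'{"name": "widget", "price": 10}' * 100_000  # ~3MB raw
packed = pack(payload)
assert len(packed) < MAX_CACHE_SIZE
assert unpack(packed) == payload
```

Keep the size check after compression: as noted under Common Causes, compressed data can still exceed the slab limit.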

4. **Chunk large items across multiple keys:**

```python
def set_large_item(client, key, data, chunk_size=512 * 1024):
    """Store data larger than the max slab size by chunking."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    # Store metadata (chunk count)
    client.set(f"{key}:meta", str(len(chunks)).encode(), expire=3600)

    # Store each chunk
    for i, chunk in enumerate(chunks):
        client.set(f"{key}:chunk:{i}", chunk, expire=3600)

def get_large_item(client, key):
    """Retrieve a chunked large item."""
    meta = client.get(f"{key}:meta")
    if not meta:
        return None

    num_chunks = int(meta.decode())
    chunks = []
    for i in range(num_chunks):
        chunk = client.get(f"{key}:chunk:{i}")
        if chunk is None:
            return None  # Incomplete, treat as a miss
        chunks.append(chunk)

    return b''.join(chunks)
```
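The chunking scheme can be sanity-checked without a live server by substituting an in-memory stand-in for the client. `FakeClient` below is purely illustrative, and the helpers repeat step 4 so the sketch is self-contained:

```python
class FakeClient:
    """Minimal in-memory stand-in for a Memcached client (illustrative only)."""
    def __init__(self):
        self._store = {}
    def set(self, key, value, expire=0):
        self._store[key] = value
        return True
    def get(self, key):
        return self._store.get(key)

def set_large_item(client, key, data, chunk_size=512 * 1024):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    client.set(f"{key}:meta", str(len(chunks)).encode(), expire=3600)
    for i, chunk in enumerate(chunks):
        client.set(f"{key}:chunk:{i}", chunk, expire=3600)

def get_large_item(client, key):
    meta = client.get(f"{key}:meta")
    if not meta:
        return None
    num_chunks = int(meta.decode())
    chunks = []
    for i in range(num_chunks):
        chunk = client.get(f"{key}:chunk:{i}")
        if chunk is None:
            return None  # Incomplete, treat as a miss
        chunks.append(chunk)
    return b''.join(chunks)

client = FakeClient()
data = b"x" * (3 * 1024 * 1024)  # 3MB, over the default 1MB limit
set_large_item(client, "report", data)
assert get_large_item(client, "report") == data   # round trip intact
assert get_large_item(client, "missing") is None  # absent meta -> miss
```

One design note: because each chunk can be evicted independently, the reader must treat any missing chunk as a full miss, which is why `get_large_item` bails out with `None` rather than returning partial data.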

## Prevention

- Set `-I` appropriately for your maximum expected item size (max 128MB)
- Implement application-level size checks before caching
- Use compression (gzip, zlib) to reduce item sizes
- Chunk large items across multiple keys when exceeding slab limits
- Monitor item size distribution with `stats sizes`
- Consider a different store (Redis, S3) for very large cached objects
- Set per-key size limits based on the expected data pattern
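To act on the `stats sizes` monitoring suggestion, its output (lines of the form `STAT <size-bucket> <count>`) can be scanned for the largest bucket in use and compared against the configured limit. A sketch; the sample text below is illustrative, not captured from a real server, and note that on some Memcached versions `stats sizes` walks the whole cache and can briefly pause the server:

```python
def largest_item_bucket(stats_sizes_text):
    """Return the largest size bucket (bytes) seen in `stats sizes` output,
    assuming lines of the form 'STAT <size> <count>'."""
    sizes = []
    for line in stats_sizes_text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "STAT":
            try:
                sizes.append(int(parts[1]))
            except ValueError:
                continue
    return max(sizes, default=0)

# Hypothetical sample output for illustration
sample = "STAT 704 2048\r\nSTAT 1184 120\r\nSTAT 520192 3\r\nEND\r\n"
print(largest_item_bucket(sample))  # 520192
```

Alerting when the largest bucket approaches `max_item_size` gives warning before SETs start failing.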