## Introduction

Linkerd sidecar proxies that leak memory are eventually OOMKilled, breaking the service mesh for the affected pods. The result is failed service-to-service communication and cascading errors downstream.

## Symptoms

- Linkerd proxy container OOMKilled
- Pod events show the proxy restarting frequently
- Proxy memory usage growing continuously
- Service communication failing while the proxy is down
- Proxy logs showing memory allocation errors
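To confirm the proxy was actually OOMKilled rather than crashing for another reason, inspect the container's last termination state. A sketch, assuming a meshed pod; `my-app-pod` and `<namespace>` are placeholders:

```shell
# Print each container's last termination reason; an OOMKilled proxy
# reports reason "OOMKilled" (exit code 137) for the linkerd-proxy container.
kubectl get pod my-app-pod -n <namespace> \
  -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.lastState.terminated.reason}{"\n"}{end}'

# Or review the same information plus recent events in readable form:
kubectl describe pod my-app-pod -n <namespace> | grep -A 5 "linkerd-proxy"
```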

## Common Causes

- Memory limit too low for the proxy's workload
- Memory leak in a specific Linkerd version
- High traffic volume causing proxy memory pressure
- Connection leak in the proxy (connections not being closed)
- Proxy retaining memory instead of releasing it under sustained load
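To judge whether the configured limit is simply too low, compare the proxy's memory limit against its live usage. A sketch; the pod name and namespace are placeholders, and `kubectl top` requires metrics-server:

```shell
# Configured memory limit of the linkerd-proxy sidecar:
kubectl get pod my-app-pod -n <namespace> \
  -o jsonpath='{.spec.containers[?(@.name=="linkerd-proxy")].resources.limits.memory}'

# Live per-container memory usage for comparison:
kubectl top pod my-app-pod -n <namespace> --containers
```

If usage sits close to the limit under normal traffic, raising the limit (step 2 below) is the likely fix; if usage grows without bound under flat traffic, suspect a leak in the proxy version.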

## Step-by-Step Fix

1. **Check proxy resource usage**:

   ```bash
   linkerd top deployment/my-app -n <namespace>
   kubectl top pod -l app=my-app -n <namespace>
   ```

2. **Increase proxy memory limit** (`linkerd inject` reads a manifest from a file or stdin, so fetch the deployment first):

   ```bash
   kubectl get deployment/my-app -n <namespace> -o yaml \
     | linkerd inject --proxy-cpu-limit 500m --proxy-memory-limit 512Mi - \
     | kubectl apply -f -
   ```

3. **Upgrade Linkerd version**:

   ```bash
   linkerd upgrade | kubectl apply --prune -l linkerd.io/control-plane-ns -f -
   ```
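As an alternative to re-running `linkerd inject`, proxy resources can be set with Linkerd's `config.linkerd.io/*` annotations on the pod template; the proxy injector applies them on the next rollout. A sketch, assuming the deployment is already meshed and the request/limit values suit your workload:

```shell
# Set proxy memory request/limit via pod-template annotations.
# Patching the template triggers a rolling restart that re-injects the proxy.
kubectl patch deployment my-app -n <namespace> --type merge -p '
spec:
  template:
    metadata:
      annotations:
        config.linkerd.io/proxy-memory-request: "128Mi"
        config.linkerd.io/proxy-memory-limit: "512Mi"'
```

The annotation route keeps the setting in the workload manifest itself, so it survives future re-injections and GitOps syncs.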

## Prevention

- Set appropriate resource limits for sidecar proxies
- Monitor proxy memory usage over time
- Keep Linkerd updated to the latest stable version
- Use Linkerd's built-in diagnostics (e.g. `linkerd check`)
- Test proxy resource usage under load before production
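For the monitoring item above, a crude but useful baseline is to sample the proxy's memory at fixed intervals and watch for monotonic growth under flat traffic, which is the signature of a leak rather than normal buffering. A rough sketch; the label selector and namespace are placeholders, and a real setup would scrape these metrics into Prometheus instead:

```shell
# Sample linkerd-proxy memory every 60 s for all pods matching the label.
# Column 2 of `kubectl top pod --containers` is the container name,
# column 4 the memory usage.
while true; do
  kubectl top pod -l app=my-app -n <namespace> --containers --no-headers \
    | awk -v ts="$(date +%H:%M:%S)" '$2 == "linkerd-proxy" { print ts, $1, $4 }'
  sleep 60
done
```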