## Introduction
Linux cgroups (control groups) enforce resource limits on processes and containers. When a container reaches its cgroup memory limit, the kernel first tries to reclaim memory within the cgroup; if reclaim cannot free enough pages, it invokes the OOM killer against the container's processes. Even before an OOM kill, heavy reclaim can severely throttle the application. This is common in Kubernetes environments where `resources.limits.memory` is set too aggressively.
## Symptoms

- Container OOM killed, with `dmesg` showing `Memory cgroup out of memory`
- Application slows down significantly under load
- `kubectl describe pod` shows the last state as `OOMKilled` (`reason: OOMKilled`)
- Container restart loop with exit code 137
- `memory.max` limit reached in cgroup v2, or `memory.limit_in_bytes` in v1
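Exit code 137 is worth decoding explicitly: 137 = 128 + 9, i.e. the process was killed with SIGKILL, the signal the OOM killer delivers. A tiny helper (hypothetical, not part of Docker or kubectl) makes the check explicit:

```shell
#!/bin/sh
# Interpret a container exit code. Codes above 128 mean the process died
# from a signal: code - 128 is the signal number (9 = SIGKILL, the OOM killer).
explain_exit_code() {
  code="$1"
  if [ "$code" -gt 128 ]; then
    echo "killed by signal $((code - 128))"
  else
    echo "exited normally with status $code"
  fi
}

explain_exit_code 137   # -> killed by signal 9
```

The same check can be applied to the value from `docker inspect --format '{{.State.ExitCode}}' <container-id>`.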
## Common Causes
- Memory limit set too low for the application's working set
- Memory leak in application slowly consuming allocated limit
- Java JVM heap not configured to respect container limits (pre-Java 10)
- Node.js V8 heap growing beyond the cgroup limit without `--max-old-space-size`
- Kubernetes request/limit mismatch causing scheduling on undersized nodes
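A slow leak (the second cause above) shows up as steady growth in the cgroup's usage counter over time. A minimal sampling loop, as a sketch; the file path and interval in the usage comment are illustrative:

```shell
#!/bin/sh
# Sample a memory counter file every <interval> seconds, <count> times,
# printing "unix-timestamp bytes" lines; steadily rising values suggest a leak.
sample_memory() {
  file="$1"; count="$2"; interval="$3"
  i=0
  while [ "$i" -lt "$count" ]; do
    printf '%s %s\n' "$(date +%s)" "$(cat "$file")"
    i=$((i + 1))
    sleep "$interval"
  done
}

# Example (path is illustrative):
# sample_memory /sys/fs/cgroup/system.slice/docker-<id>.scope/memory.current 60 10
```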
## Step-by-Step Fix
1. Check current memory usage and limits:

   ```bash
   # cgroup v2
   cat /sys/fs/cgroup/system.slice/docker-*.scope/memory.current
   cat /sys/fs/cgroup/system.slice/docker-*.scope/memory.max

   # cgroup v1
   cat /sys/fs/cgroup/memory/docker/*/memory.usage_in_bytes
   cat /sys/fs/cgroup/memory/docker/*/memory.limit_in_bytes
   ```
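To turn those two numbers into something alert-able, a small helper can compute usage as a percentage of the limit. A sketch; it takes the file paths as arguments so the same function works for either cgroup version:

```shell
#!/bin/sh
# Compute memory usage as a percentage of the cgroup limit.
# Arg 1: usage file (memory.current or memory.usage_in_bytes)
# Arg 2: limit file (memory.max or memory.limit_in_bytes)
usage_percent() {
  current=$(cat "$1")
  max=$(cat "$2")
  if [ "$max" = "max" ]; then   # cgroup v2 reports the literal string "max" when unlimited
    echo "unlimited"
  else
    echo $((current * 100 / max))
  fi
}

# Example (path is illustrative):
# usage_percent /sys/fs/cgroup/system.slice/docker-<id>.scope/memory.current \
#               /sys/fs/cgroup/system.slice/docker-<id>.scope/memory.max
```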
2. Check for OOM events in the container:

   ```bash
   dmesg -T | grep -i "oom"
   journalctl -k | grep "memory cgroup"
   ```
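Kernel OOM log lines include a `Killed process <pid> (<name>)` fragment identifying the victim. A small filter (a sketch; the sample line below is illustrative, and real `dmesg` output varies by kernel version) pulls that out:

```shell
#!/bin/sh
# Extract "name (pid N)" from kernel OOM kill log lines on stdin.
oom_victim() {
  sed -n 's/.*Killed process \([0-9]*\) (\([^)]*\)).*/\2 (pid \1)/p'
}

echo "Memory cgroup out of memory: Killed process 1234 (java) total-vm:4000kB" | oom_victim
# -> java (pid 1234)
```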
3. For Docker, increase the container memory limit:

   ```bash
   docker update --memory 4g --memory-swap 4g <container-id>
   ```
4. For Kubernetes, adjust resource limits:

   ```yaml
   resources:
     requests:
       memory: "1Gi"
     limits:
       memory: "2Gi"
   ```
   Apply the change:

   ```bash
   kubectl apply -f deployment.yaml
   ```

5. Configure the application to respect container limits:

   ```bash
   # Java 8 (8u131 through 8u190) and Java 9: experimental cgroup awareness
   export JAVA_OPTS="-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"

   # Java 8u191+ and Java 10+ are container-aware; size the heap relative to the limit
   export JAVA_OPTS="-XX:MaxRAMPercentage=75.0"

   # Node.js
   export NODE_OPTIONS="--max-old-space-size=1536"
   ```
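Rather than hard-coding a value like 1536, the Node.js heap cap can be derived from the container limit. A sketch; the 75% figure mirrors the Java heap percentage above and is an assumption (headroom for buffers and native allocations), not a V8 requirement:

```shell
#!/bin/sh
# Derive a --max-old-space-size value (in MiB) as 75% of the container
# memory limit in bytes, leaving ~25% headroom for non-heap memory.
node_heap_mb() {
  limit_bytes="$1"
  echo $((limit_bytes * 75 / 100 / 1024 / 1024))
}

# For a 2 GiB limit:
node_heap_mb $((2 * 1024 * 1024 * 1024))   # -> 1536
```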
6. Enable memory monitoring for early detection:

   ```bash
   # System-wide memory pressure (PSI, kernel 4.20+)
   cat /proc/pressure/memory

   # Per-cgroup PSI (cgroup v2 only; cgroup v1 has no memory.pressure file)
   cat /sys/fs/cgroup/system.slice/docker-*.scope/memory.pressure
   ```
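PSI output has the form `some avg10=X avg60=Y avg300=Z total=N` (plus a matching `full` line). A sketch that extracts the 10-second `some` average, which is a reasonable input for an alert threshold:

```shell
#!/bin/sh
# Print the "some" avg10 value from PSI-formatted input on stdin.
psi_some_avg10() {
  awk '$1 == "some" { sub("avg10=", "", $2); print $2 }'
}

printf 'some avg10=1.23 avg60=0.50 avg300=0.10 total=99\nfull avg10=0.00 avg60=0.00 avg300=0.00 total=0\n' \
  | psi_some_avg10
# -> 1.23
```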
## Prevention
- Set memory limits based on actual application profiling, not estimates
- Use `requests` and `limits` in Kubernetes with a 20-30% buffer
- Enable Java container awareness: `-XX:+UseContainerSupport` (Java 10+)
- Implement memory usage alerts at 70% and 85% of the cgroup limit
- Use systemd `MemoryHigh=` as a soft limit before the hard `MemoryMax=`
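The profiling-plus-buffer advice above can be made concrete with a small calculator; the peak and buffer values in the example are illustrative:

```shell
#!/bin/sh
# Compute a memory limit from a profiled peak usage plus a safety buffer.
# Arg 1: observed peak in bytes; arg 2: buffer percentage (e.g. 20-30).
limit_with_buffer() {
  peak_bytes="$1"; buffer_pct="$2"
  echo $((peak_bytes * (100 + buffer_pct) / 100))
}

# 1 GiB observed peak plus a 25% buffer:
limit_with_buffer $((1024 * 1024 * 1024)) 25   # -> 1342177280
```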