# Docker Container OOM Killed: Diagnose and Fix Out of Memory Errors
Your container dies unexpectedly. You check the logs and see nothing—or just a sudden stop. The exit code is 137. This is an OOM kill—the Linux kernel terminated your container because it ran out of memory.
Exit code 137 = 128 + 9 (SIGKILL signal). The kernel's OOM killer targeted your container process.
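The 128 + 9 arithmetic is easy to verify without Docker. This sketch spawns a shell child, kills it with SIGKILL, and prints the exit code the parent shell reports:

```python
import subprocess

# POSIX shells report a child killed by a signal as 128 + signal number,
# so SIGKILL (signal 9) surfaces as exit code 137 -- the same code Docker shows.
result = subprocess.run(
    ["sh", "-c", "sh -c 'kill -9 $$'; echo $?"],
    capture_output=True, text=True,
)
print(result.stdout.strip())  # 137
```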
## Identifying OOM Kills
### Check Exit Code
```bash
docker ps -a --filter "status=exited" --format "table {{.Names}}\t{{.Status}}"
```

Look for "Exited (137)" status.
### Check Container Events
```bash
docker events --filter "container=<container_name>" --since 1h
```

Look for "oom" in the output:
```
2026-04-03T10:30:00.000000000Z container oom <container_id> ...
```

### Check Docker Inspect
```bash
docker inspect <container_name> --format '{{.State.OOMKilled}}'  # true means OOM killed
docker inspect <container_name> --format '{{.State.ExitCode}}'   # 137 = OOM kill
```
### Check Kernel Logs
```bash
# Check dmesg for OOM events
dmesg | grep -i "out of memory"
dmesg | grep -i "oom"

# Or check syslog
grep -i "oom" /var/log/syslog
journalctl -k | grep -i oom
```
You'll see messages like:
```
Out of memory: Kill process 12345 (python) score 500 or sacrifice child
Killed process 12345 (python) total-vm:2048000kB, anon-rss:1024000kB, file-rss:0kB
Memory cgroup out of memory: Kill process 12345
```

## Diagnosing Memory Usage
### Check Container Memory Stats
```bash
docker stats <container_name> --no-stream
```

Output:

```
CONTAINER   CPU %    MEM USAGE / LIMIT   MEM %
myapp       50.0%    1.5GiB / 2GiB       75.0%
```
If MEM USAGE is near the LIMIT, you've found the problem.
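If you're scripting around `docker stats`, the `MEM USAGE / LIMIT` column can be turned into a percentage. A minimal sketch (function names are illustrative):

```python
# Unit suffixes docker stats uses for memory columns
UNITS = {"B": 1, "KiB": 1024, "MiB": 1024**2, "GiB": 1024**3}

def parse_size(s):
    """Parse a docker-stats size such as '1.5GiB' into bytes."""
    # Try longer suffixes first so 'GiB' is not mistaken for 'B'
    for unit, factor in sorted(UNITS.items(), key=lambda kv: -len(kv[0])):
        if s.endswith(unit):
            return float(s[: -len(unit)]) * factor
    raise ValueError(f"unrecognized size: {s}")

def mem_percent(column):
    """Compute MEM % from a 'MEM USAGE / LIMIT' column value."""
    usage, limit = (parse_size(part.strip()) for part in column.split("/"))
    return 100.0 * usage / limit

print(mem_percent("1.5GiB / 2GiB"))  # 75.0
```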
### Check Detailed Memory Info
```bash
# cgroup v1 paths (on cgroup v2 hosts, use memory.current and memory.max instead)
docker exec <container> cat /sys/fs/cgroup/memory/memory.usage_in_bytes
docker exec <container> cat /sys/fs/cgroup/memory/memory.limit_in_bytes
docker exec <container> cat /sys/fs/cgroup/memory/memory.stat
```

### Check Memory by Process Inside Container
```bash
docker exec <container> ps aux --sort=-%mem
docker exec <container> top -o %MEM
```

### Monitor in Real-Time
```bash
# Watch memory usage
watch -n 1 'docker stats --no-stream'

# Continuous stats
docker stats <container>
```
## Understanding Memory Limits
### How Docker Memory Limits Work
When you set a memory limit:

```bash
docker run --memory="512m" <image>
```

Docker configures the cgroup memory limit. When the container exceeds this limit:

1. The kernel tries to reclaim memory (page cache, etc.).
2. If usage is still over the limit, processes are throttled.
3. If memory can't be freed, the OOM killer triggers.
### Memory Limit vs Memory Reservation
- `--memory`: Hard limit. The container is killed if it exceeds this.
- `--memory-reservation`: Soft limit. The container can burst above this but gets throttled under memory pressure.
```bash
docker run --memory="1g" --memory-reservation="512m" <image>
```

### Swap Limits
By default, containers can use swap:
```bash
docker run --memory="1g" --memory-swap="2g" <image>
# --memory-swap is the total: 2g = 1g memory + 1g swap
```

Disable swap:
```bash
docker run --memory="1g" --memory-swap="1g" <image>
# Setting --memory-swap equal to --memory disables swap
```

### OOM Score Adjust
Control OOM killer priority:
```bash
docker run --oom-score-adj=500 <image>    # More likely to be killed (default 0)
docker run --oom-score-adj=-500 <image>   # Less likely to be killed
docker run --oom-score-adj=-1000 <image>  # Never OOM killed
```

## Fixing OOM Issues
### Fix 1: Increase Memory Limit
The simplest fix—give the container more memory.
```bash
docker run --memory="2g" --memory-swap="4g" <image>
```

In Docker Compose:
```yaml
services:
  myapp:
    image: myimage:latest
    deploy:
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 1G
```

### Fix 2: Disable OOM Kill (Not Recommended)
Let the container use unlimited memory. There is no `--memory="unlimited"` value; a container is unlimited when you simply don't set `--memory`:

```bash
docker run <image>   # No --memory flag = no memory limit

# Or keep the limit but exempt the container from the OOM killer
docker run --memory="1g" --oom-kill-disable <image>
```

Warning: Both options can cause the host to run out of memory and become unstable.
### Fix 3: Optimize Application Memory
Profile your application's memory usage:
For Java applications:
```bash
docker run --memory="2g" \
  -e JAVA_OPTS="-Xms512m -Xmx1536m -XX:+UseContainerSupport" \
  openjdk:17 java -jar app.jar
```
For Node.js applications:
```bash
docker run --memory="1g" \
  -e NODE_OPTIONS="--max-old-space-size=768" \
  node:18 node app.js
```
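The 768 above follows a common rule of thumb: give V8's old space roughly three quarters of the container limit and leave the rest for buffers, stack, and native memory. A sketch of that arithmetic (the 0.75 fraction is an assumption, not an official default):

```python
def max_old_space_mb(container_mb, fraction=0.75):
    """Suggest a --max-old-space-size value (in MB) as a fraction of the
    container's memory limit, leaving headroom for non-heap memory.
    The 0.75 default is a rule of thumb, not a V8 recommendation."""
    return int(container_mb * fraction)

print(max_old_space_mb(1024))  # 768, matching the --memory="1g" example above
```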
For Python applications:
```python
# Limit memory-intensive operations
# Use generators instead of lists
# Clear large objects when done
```
### Fix 4: Fix Memory Leaks
If memory usage grows continuously, you have a leak:
```bash
# Monitor memory over time
while true; do
  docker stats --no-stream <container>
  sleep 60
done
```

Common causes:

- Unclosed database connections
- Caching without expiration
- Event listener accumulation
- Large object retention
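"Caching without expiration" is the easiest of these to demonstrate and to fix: an unbounded dict grows with every unique key, while a size-bounded LRU cache keeps memory flat. A minimal sketch (the class name and `max_entries` bound are illustrative):

```python
from collections import OrderedDict

class BoundedCache:
    """Cache that evicts the least-recently-used entry past a size bound,
    so memory stays flat instead of growing with every unique key."""

    def __init__(self, max_entries=1024):
        self.max_entries = max_entries
        self._data = OrderedDict()

    def get(self, key):
        if key in self._data:
            self._data.move_to_end(key)  # Mark as recently used
            return self._data[key]
        return None

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.max_entries:
            self._data.popitem(last=False)  # Evict the least-recently-used entry

cache = BoundedCache(max_entries=2)
cache.put("a", 1); cache.put("b", 2); cache.put("c", 3)
print(cache.get("a"))  # None -- "a" was evicted; the cache never exceeds 2 entries
```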
### Fix 5: Configure Swap Properly
Allow the container to swap less critical memory:
```bash
docker run --memory="1g" --memory-swap="2g" <image>
```

But remember: swap is slower than RAM. If your app is constantly swapping, increase memory instead.
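Because `--memory-swap` is the total of memory plus swap, the actual swap a container gets is the difference between the two flags. A small sketch of that arithmetic (the helper name is illustrative; only k/m/g suffixes are handled):

```python
SIZES = {"k": 1024, "m": 1024**2, "g": 1024**3}

def docker_swap_bytes(memory, memory_swap):
    """Actual swap available to a container: --memory-swap is memory PLUS
    swap, so swap = memory_swap - memory."""
    def to_bytes(s):
        return int(s[:-1]) * SIZES[s[-1].lower()]
    return to_bytes(memory_swap) - to_bytes(memory)

print(docker_swap_bytes("1g", "2g") // 1024**3)  # 1 -> 1 GiB of real swap
print(docker_swap_bytes("1g", "1g"))             # 0 -> swap disabled
```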
### Fix 6: Use Memory Reservation
Allow bursting with soft limits:
```yaml
services:
  myapp:
    image: myimage:latest
    deploy:
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 512M
```

The container can use up to 2G but is guaranteed 512M.
### Fix 7: Multiple Smaller Containers
Instead of one large container:
```bash
docker run --memory="8g" big-app
```

Use multiple smaller containers:
```bash
docker-compose up --scale worker=4
```

Each with smaller memory limits:
```yaml
services:
  worker:
    image: myimage:latest
    deploy:
      resources:
        limits:
          memory: 2G
      replicas: 4
```

## Memory-Specific Optimizations
### For Java Applications
Java 10+ has container awareness built-in. For older versions:
```bash
# Use container-aware flags
-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap

# Explicit heap size (leave room for non-heap)
-XX:MaxRAMFraction=2   # Use 1/2 of container memory for heap
```
### For Node.js Applications
```bash
# Set heap size based on container memory
NODE_OPTIONS="--max-old-space-size=1536"   # For a 2GB container
```

### For Python Applications
```python
# Use memory-efficient data structures
import array
arr = array.array('i', [1, 2, 3])  # More memory efficient than a list

# Use generators
def process_large_file():
    with open('large.txt') as f:
        for line in f:
            yield process(line)
```
### For Go Applications
```go
// Set GOMEMLIMIT (Go 1.19+)
import "runtime/debug"

debug.SetMemoryLimit(1024 * 1024 * 1024) // 1 GB
```

## Prevention Strategies
### Set Appropriate Limits
Base limits on actual usage:
```bash
# Monitor peak usage over time
docker stats <container> --no-stream
```
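A common rule of thumb is to set the limit 20-30% above the observed peak. As a tiny sketch (the 25% default headroom is an illustrative middle value):

```python
def suggested_limit_mib(peak_mib, headroom=0.25):
    """Suggest a memory limit above observed peak usage.
    headroom=0.25 picks the middle of the 20-30% rule of thumb."""
    return int(peak_mib * (1 + headroom))

print(suggested_limit_mib(1536))  # 1920 -> round up to a 2 GiB limit
```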
Set the limit 20-30% above the observed peak.

### Health Checks with Memory
```yaml
services:
  myapp:
    image: myimage:latest
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    deploy:
      resources:
        limits:
          memory: 2G
```

### Restart Policy
Configure automatic restart after OOM:
```bash
docker run --restart on-failure:5 --memory="2g" <image>
```

### Monitor and Alert
Set up monitoring:
```bash
#!/bin/bash
# Simple memory alert script
THRESHOLD=90
CONTAINER=$1

MEM_PERCENT=$(docker stats --no-stream --format "{{.MemPerc}}" "$CONTAINER" | tr -d '%')
if (( $(echo "$MEM_PERCENT > $THRESHOLD" | bc -l) )); then
  echo "WARNING: $CONTAINER memory at ${MEM_PERCENT}%"
fi
```

## Verification Steps
After applying fixes:
1. Monitor memory usage:

   ```bash
   docker stats <container>
   ```

2. Check OOM status:

   ```bash
   docker inspect <container> --format '{{.State.OOMKilled}}'
   ```

3. Stress test:

   ```bash
   # Run a memory-intensive operation
   docker exec <container> stress --vm 1 --vm-bytes 1G
   ```

4. Check the container stays up:

   ```bash
   docker ps --filter "name=<container>"
   ```
OOM kills are the kernel's last resort to protect the system. If your container is being killed, it's using more memory than you allocated. Either increase the limit, fix memory leaks, or optimize your application's memory usage.