# Docker Container OOM Killed: Diagnose and Fix Out of Memory Errors

Your container dies unexpectedly. You check the logs and see nothing—or just a sudden stop. The exit code is 137. This is an OOM kill—the Linux kernel terminated your container because it ran out of memory.

Exit code 137 = 128 + 9 (SIGKILL signal). The kernel's OOM killer targeted your container process.
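This arithmetic is easy to verify in the shell: subtract 128 from the exit code to recover the signal number, and `kill -l` maps that number back to a name.

```shell
# Decode a container exit code into the signal that caused it
EXIT_CODE=137
SIGNAL=$((EXIT_CODE - 128))
echo "Signal $SIGNAL: SIG$(kill -l "$SIGNAL")"
# Signal 9: SIGKILL
```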

## Identifying OOM Kills

### Check Exit Code

```bash
docker ps -a --filter "status=exited" --format "table {{.Names}}\t{{.Status}}"
```

Look for "Exited (137)" status.

### Check Container Events

```bash
docker events --filter "container=<container_name>" --since 1h
```

Look for "oom" in the output:

```
2026-04-03T10:30:00.000000000Z container oom <container_id> ...
```

### Check Docker Inspect

```bash
docker inspect <container_name> --format '{{.State.OOMKilled}}'
# true means OOM killed

docker inspect <container_name> --format '{{.State.ExitCode}}'
# 137 = OOM kill
```

### Check Kernel Logs

```bash
# Check dmesg for OOM events
dmesg | grep -i "out of memory"
dmesg | grep -i "oom"

# Or check syslog
grep -i "oom" /var/log/syslog
journalctl -k | grep -i oom
```

You'll see messages like:

```
Out of memory: Kill process 12345 (python) score 500 or sacrifice child
Killed process 12345 (python) total-vm:2048000kB, anon-rss:1024000kB, file-rss:0kB
Memory cgroup out of memory: Kill process 12345
```
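Those numbers can be turned into something readable. Here is a small sketch that extracts the `anon-rss` field from a log line like the ones above (the sample line is hardcoded for illustration, not read from a live host):

```shell
# Extract the resident memory (anon-rss) from a sample OOM log line
LINE='Killed process 12345 (python) total-vm:2048000kB, anon-rss:1024000kB, file-rss:0kB'
RSS_KB=$(echo "$LINE" | grep -o 'anon-rss:[0-9]*' | cut -d: -f2)
echo "Killed process held $((RSS_KB / 1024)) MB of anonymous memory"
# Killed process held 1000 MB of anonymous memory
```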

## Diagnosing Memory Usage

### Check Container Memory Stats

```bash
docker stats <container_name> --no-stream
```

Output:

```
CONTAINER   CPU %    MEM USAGE / LIMIT   MEM %
myapp       50.0%    1.5GiB / 2GiB       75.0%
```

If MEM USAGE is near the LIMIT, you've found the problem.
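This check is easy to script. A sketch that strips the decimal from a `MEM %` value and compares it against a threshold (the sample value is hardcoded; in practice it would come from `docker stats --format "{{.MemPerc}}"`):

```shell
# Warn when a sampled MEM % value is close to the limit
MEM_PERC="75.0"    # sample value; normally read from docker stats
THRESHOLD=70
if [ "${MEM_PERC%.*}" -ge "$THRESHOLD" ]; then
  echo "Memory usage at ${MEM_PERC}% - investigate before the OOM killer does"
fi
```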

### Check Detailed Memory Info

```bash
# cgroup v1 paths:
docker exec <container> cat /sys/fs/cgroup/memory/memory.usage_in_bytes
docker exec <container> cat /sys/fs/cgroup/memory/memory.limit_in_bytes
docker exec <container> cat /sys/fs/cgroup/memory/memory.stat

# On cgroup v2 hosts the equivalent files are:
docker exec <container> cat /sys/fs/cgroup/memory.current
docker exec <container> cat /sys/fs/cgroup/memory.max
```

### Check Memory by Process Inside Container

```bash
docker exec <container> ps aux --sort=-%mem
docker exec <container> top -o %MEM
```

### Monitor in Real-Time

```bash
# Watch memory usage
watch -n 1 'docker stats --no-stream'

# Continuous stats
docker stats <container>
```

## Understanding Memory Limits

### How Docker Memory Limits Work

When you set a memory limit:

```bash
docker run --memory="512m" <image>
```

Docker configures the cgroup memory limit. When the container exceeds this limit:

1. The kernel tries to reclaim memory (page cache, etc.)
2. If usage is still over the limit, processes are throttled
3. If memory can't be freed, the OOM killer triggers
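The sequence above can be sketched as a toy model (the numbers are made up for illustration; the real reclaim logic lives in the kernel):

```shell
# Toy model of the kernel's response when a cgroup exceeds its limit
USAGE_MB=600; LIMIT_MB=512; RECLAIMABLE_MB=50
if [ "$USAGE_MB" -gt "$LIMIT_MB" ]; then
  USAGE_MB=$((USAGE_MB - RECLAIMABLE_MB))   # reclaim page cache etc.
  if [ "$USAGE_MB" -gt "$LIMIT_MB" ]; then
    echo "Still ${USAGE_MB}MB against a ${LIMIT_MB}MB limit: OOM killer triggers"
  fi
fi
```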

### Memory Limit vs Memory Reservation

- `--memory`: Hard limit. The container is killed if it exceeds this.
- `--memory-reservation`: Soft limit. The container can burst above this but gets throttled under memory pressure.

```bash
docker run --memory="1g" --memory-reservation="512m" <image>
```

### Swap Limits

By default, containers can use swap:

```bash
docker run --memory="1g" --memory-swap="2g" <image>
# --memory-swap is the TOTAL of memory plus swap: 1g RAM + 1g swap
```

Disable swap:

```bash
docker run --memory="1g" --memory-swap="1g" <image>
# Setting both flags to the same value leaves no room for swap
```
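Because `--memory-swap` is a total, the swap actually available is the difference between the two flags. A quick sanity check (values in MiB, chosen for illustration):

```shell
# --memory-swap is memory PLUS swap, so available swap is the difference
MEMORY_MB=1024
MEMORY_SWAP_MB=2048
SWAP_MB=$((MEMORY_SWAP_MB - MEMORY_MB))
echo "RAM: ${MEMORY_MB}MiB, swap: ${SWAP_MB}MiB"
# RAM: 1024MiB, swap: 1024MiB
```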

### OOM Score Adjust

Control OOM killer priority:

```bash
docker run --oom-score-adj=500 <image>   # More likely to be killed (default 0)
docker run --oom-score-adj=-500 <image>  # Less likely to be killed
docker run --oom-score-adj=-1000 <image> # Never OOM killed
```
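A simplified mental model (the kernel's real badness heuristic is more involved): the score is roughly the task's share of allowed memory in thousandths, with `oom_score_adj` added on top. Higher scores are killed first; `-1000` exempts the process entirely.

```shell
# Simplified model of OOM victim scoring (illustration, not the kernel's exact formula)
MEM_PERMILLE=300    # task uses ~30% of the memory it's allowed
for ADJ in 500 0 -500; do
  echo "oom_score_adj=$ADJ -> score $((MEM_PERMILLE + ADJ))"
done
```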

## Fixing OOM Issues

### Fix 1: Increase Memory Limit

The simplest fix: give the container more memory.

```bash
docker run --memory="2g" --memory-swap="4g" <image>
```

In Docker Compose:

```yaml
services:
  myapp:
    image: myimage:latest
    deploy:
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 1G
```

### Fix 2: Disable OOM Kill (Not Recommended)

Let the container use as much memory as it wants:

```bash
# Simply don't set --memory, so no limit applies
docker run <image>

# Or keep a limit but exempt the container from the OOM killer (cgroup v1 only)
docker run --memory="1g" --oom-kill-disable <image>
```

Warning: This can cause the host to run out of memory and become unstable.

### Fix 3: Optimize Application Memory

Profile your application's memory usage:

For Java applications:

```bash
docker run --memory="2g" \
  -e JAVA_OPTS="-Xms512m -Xmx1536m -XX:+UseContainerSupport" \
  openjdk:17 java -jar app.jar
```

For Node.js applications:

```bash
docker run --memory="1g" \
  -e NODE_OPTIONS="--max-old-space-size=768" \
  node:18 node app.js
```

For Python applications:

```python
# Limit memory-intensive operations
# Use generators instead of lists
# Clear large objects when done
```

### Fix 4: Fix Memory Leaks

If memory usage grows continuously, you have a leak:

```bash
# Monitor memory over time
while true; do
  docker stats --no-stream <container>
  sleep 60
done
```

Common causes:

- Unclosed database connections
- Caching without expiration
- Event listener accumulation
- Large object retention
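A leak shows up as readings that only ever go up. Here is a sketch that checks a series of samples for monotonic growth (the MiB values are hardcoded sample data; in practice they would be collected by the monitoring loop above):

```shell
# Flag a possible leak when every sample is higher than the last
SAMPLES="100 150 210 280 360"   # sample MiB readings
PREV=-1; GROWING=1
for S in $SAMPLES; do
  [ "$S" -le "$PREV" ] && GROWING=0
  PREV=$S
done
[ "$GROWING" -eq 1 ] && echo "Memory rose in every sample: possible leak"
```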

### Fix 5: Configure Swap Properly

Allow the container to swap less critical memory:

```bash
docker run --memory="1g" --memory-swap="2g" <image>
```

But remember: swap is slower than RAM. If your app is constantly swapping, increase memory instead.

### Fix 6: Use Memory Reservation

Allow bursting with soft limits:

```yaml
services:
  myapp:
    image: myimage:latest
    deploy:
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 512M
```

The container can use up to 2G but is guaranteed 512M.

### Fix 7: Multiple Smaller Containers

Instead of one large container:

```bash
docker run --memory="8g" big-app
```

Use multiple smaller containers:

```bash
docker-compose up --scale worker=4
```

Each with smaller memory limits:

```yaml
services:
  worker:
    image: myimage:latest
    deploy:
      resources:
        limits:
          memory: 2G
    scale: 4
```

## Memory-Specific Optimizations

### For Java Applications

Java 10+ has container awareness built in. For older versions:

```bash
# Use container-aware flags
-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap

# Explicit heap sizing (leave room for non-heap memory)
-XX:MaxRAMFraction=2  # Use 1/2 of container memory for heap
```

### For Node.js Applications

```bash
# Set heap size based on container memory
NODE_OPTIONS="--max-old-space-size=1536"  # For a 2GB container
```
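A common rule of thumb (an assumption, not a Node.js requirement) is to give the old-space heap roughly 75% of the container limit, leaving headroom for buffers, stack, and native memory:

```shell
# Derive --max-old-space-size as ~75% of the container limit
CONTAINER_MB=2048
HEAP_MB=$((CONTAINER_MB * 3 / 4))
echo "NODE_OPTIONS=--max-old-space-size=$HEAP_MB"
# NODE_OPTIONS=--max-old-space-size=1536
```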

### For Python Applications

```python
# Use memory-efficient data structures
import array
arr = array.array('i', [1, 2, 3])  # More memory efficient than a list

# Use generators to stream instead of loading everything at once
def process_large_file():
    with open('large.txt') as f:
        for line in f:
            yield process(line)
```

### For Go Applications

```go
package main

import "runtime/debug"

func main() {
	// Set a soft memory limit (Go 1.19+); the GOMEMLIMIT env var does the same
	debug.SetMemoryLimit(1024 * 1024 * 1024) // 1 GiB
}
```

## Prevention Strategies

### Set Appropriate Limits

Base limits on actual usage:

```bash
# Monitor peak usage over time
docker stats <container> --no-stream
# Set the limit 20-30% above the observed peak
```
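For example, with a 25% buffer over a measured peak (the peak value here is illustrative):

```shell
# Derive a limit from observed peak usage plus 25% headroom
PEAK_MB=1600
LIMIT_MB=$((PEAK_MB * 125 / 100))
echo "docker run --memory=\"${LIMIT_MB}m\" <image>"
# docker run --memory="2000m" <image>
```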

### Health Checks with Memory

```yaml
services:
  myapp:
    image: myimage:latest
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    deploy:
      resources:
        limits:
          memory: 2G
```

### Restart Policy

Configure automatic restart after OOM:

```bash
docker run --restart on-failure:5 --memory="2g" <image>
```

### Monitor and Alert

Set up monitoring:

```bash
#!/bin/bash
# Simple memory alert script
THRESHOLD=90
CONTAINER=$1
MEM_PERCENT=$(docker stats --no-stream --format "{{.MemPerc}}" "$CONTAINER" | tr -d '%')
if (( $(echo "$MEM_PERCENT > $THRESHOLD" | bc -l) )); then
  echo "WARNING: $CONTAINER memory at ${MEM_PERCENT}%"
fi
```

## Verification Steps

After applying fixes:

1. Monitor memory usage:

   ```bash
   docker stats <container>
   ```

2. Check OOM status:

   ```bash
   docker inspect <container> --format '{{.State.OOMKilled}}'
   ```

3. Stress test:

   ```bash
   # Run a memory-intensive operation
   docker exec <container> stress --vm 1 --vm-bytes 1G
   ```

4. Check the container stays up:

   ```bash
   docker ps --filter "name=<container>"
   ```

OOM kills are the kernel's last resort to protect the system. If your container is being killed, it's using more memory than you allocated. Either increase the limit, fix memory leaks, or optimize your application's memory usage.