What's Actually Happening

Docker containers exit with code 137 when they are killed by SIGKILL (exit code = 128 + signal 9). This usually means the container exceeded its memory limit and was terminated by the kernel's OOM (Out of Memory) killer, though it can also mean the container was killed manually.

The Error You'll See

Container status:

```bash
$ docker ps -a
CONTAINER ID   IMAGE    STATUS                       NAMES
abc123         my-app   Exited (137) 5 seconds ago   my-container

$ docker inspect my-container --format '{{.State.ExitCode}}'
137

$ docker inspect my-container --format '{{.State.OOMKilled}}'
true
```

Docker logs:

```bash
$ docker logs my-container
(no output - container killed abruptly)
```

System log:

```bash
$ dmesg | grep -i "oom"
[12345.678] my-container invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
[12345.678] oom-killer: Kill process 12345 (my-app) score 1000 or sacrifice child
[12345.678] Killed process 12345 (my-app) total-vm:500000kB, anon-rss:100000kB
```

Why This Happens

  1. Memory limit exceeded - Container used more memory than allowed
  2. OOM killer triggered - System killed the container due to memory pressure
  3. Manual kill - Container killed with docker kill or SIGKILL
  4. Memory leak - Application leaking memory over time
  5. Insufficient limit - Memory limit too low for the workload
  6. Host memory pressure - Host running out of memory
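All of these surface through the same exit-code arithmetic: any code above 128 means "killed by signal (code - 128)". A minimal Python sketch of that decoding (the function name is ours, not part of Docker):

```python
import signal

def describe_exit_code(code):
    """Map a container exit code to what happened.
    Codes above 128 mean 'killed by signal (code - 128)'."""
    if code == 0:
        return "normal exit"
    if code > 128:
        sig = signal.Signals(code - 128)  # e.g. 137 - 128 = 9 = SIGKILL
        return f"killed by {sig.name}"
    return f"application error (exit {code})"

print(describe_exit_code(137))  # killed by SIGKILL
```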

Step 1: Check Container Memory Status

```bash
# Check if OOM killed the container
docker inspect my-container --format '{{.State.OOMKilled}}'
# true = killed by OOM, false = killed manually

# Check container memory limit
docker inspect my-container --format '{{.HostConfig.Memory}}'
# 0 = unlimited, positive number = limit in bytes

# Check container memory usage
docker stats my-container --no-stream
# Shows memory usage (only works while the container is running)

# Check container exit code
docker inspect my-container --format '{{.State.ExitCode}}'
# 137 = SIGKILL (128 + 9)

# Check container status
docker ps -a | grep my-container
```

Step 2: Check Host Memory Pressure

```bash
# Check host memory usage
free -h

# Check memory details
grep -E "MemTotal|MemFree|MemAvailable" /proc/meminfo

# Check OOM events in the system log
dmesg | grep -i "oom" | tail -20

# Or via journalctl
journalctl -k | grep -i "oom"

# Check Docker daemon memory settings
grep memory /etc/docker/daemon.json

# Check if the host has swap
swapon --show
# No swap = more likely OOM

# Check the cgroup memory limit (cgroup v1 path)
cat /sys/fs/cgroup/memory/docker/<container-id>/memory.limit_in_bytes
# On cgroup v2 hosts, look for memory.max under the container's scope instead
```
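If you script these checks, `/proc/meminfo` is easy to parse. A small sketch (function name and sample values are ours, for illustration only):

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style output into a dict of kB values.
    Lines look like 'MemTotal:       16384000 kB'."""
    info = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields and fields[0].isdigit():
            info[key.strip()] = int(fields[0])
    return info

sample = """MemTotal:       16384000 kB
MemFree:          512000 kB
MemAvailable:    2048000 kB"""

info = parse_meminfo(sample)
# A low MemAvailable relative to MemTotal suggests host memory pressure
```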

Step 3: Increase Container Memory Limit

```bash
# Check current memory limit
docker inspect my-container --format '{{.HostConfig.Memory}}'

# Increase the memory limit on run
docker run --memory="2g" --memory-swap="2g" my-image

# --memory-swap is the total of memory + swap (if swap is enabled):
# --memory-swap="2g" with --memory="1g" means 1GB RAM + 1GB swap

# For an existing container, recreate it with a higher limit
docker stop my-container
docker rm my-container
docker run --name my-container --memory="4g" my-image
```

In docker-compose.yml:

```yaml
services:
  my-app:
    image: my-image
    deploy:
      resources:
        limits:
          memory: 4G
        reservations:
          memory: 2G
```

In a Kubernetes pod spec:

```yaml
resources:
  limits:
    memory: "4Gi"
  requests:
    memory: "2Gi"
```
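Note that Docker and Kubernetes spell sizes differently: Docker's `2g` is binary (2 × 1024³ bytes), Kubernetes' `4Gi` is also binary, but a bare `4G` in Kubernetes is decimal (4 × 10⁹). A hypothetical converter illustrating the distinction (the unit table is our simplifying assumption, not an official parser; Docker itself accepts either case for its suffixes):

```python
def to_bytes(size):
    """Convert Docker/Kubernetes memory strings to bytes.
    Assumes: lowercase single letters = Docker binary units,
    Ki/Mi/Gi = Kubernetes binary, uppercase K/M/G = Kubernetes decimal."""
    size = size.strip()
    units = {
        "b": 1, "k": 1024, "m": 1024**2, "g": 1024**3,   # Docker (binary)
        "Ki": 1024, "Mi": 1024**2, "Gi": 1024**3,        # k8s (binary)
        "K": 1000, "M": 1000**2, "G": 1000**3,           # k8s (decimal)
    }
    # Try longer suffixes first so "Gi" wins over "G"
    for suffix in sorted(units, key=len, reverse=True):
        if size.endswith(suffix):
            return int(float(size[: -len(suffix)]) * units[suffix])
    return int(size)  # bare number = bytes
```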

Step 4: Disable OOM Killer for Container

```bash
# Lower the OOM score so the container is killed last
docker run --oom-score-adj=-1000 my-image
# -1000 = least likely to be killed
#  1000 = most likely to be killed

# Or disable the OOM killer entirely (dangerous)
docker run --oom-kill-disable=true my-image
# Container won't be killed, but the host may become unstable or crash
# Use with caution; it is only supported on cgroup v1 hosts and should
# always be combined with a --memory limit
```

In docker-compose:

```yaml
services:
  my-app:
    oom_score_adj: -500
```

Step 5: Identify Memory Leak

```bash
# Monitor container memory usage
docker stats my-container

# Watch memory grow over time
watch -n 5 'docker stats my-container --no-stream'
# If memory continuously increases, the app likely has a leak

# Check the application's memory profile
# For Java apps:
docker exec my-container jmap -histo 1

# For Node.js apps, enable the inspector on the running process:
docker exec my-container kill -USR1 1

# For Go apps (requires net/http/pprof to be served by the app):
docker exec my-container curl localhost:6060/debug/pprof/heap

# Memory profiling tools:
# Java: VisualVM, YourKit
# Node.js: heapdump, clinic
# Python: memory_profiler

# Check process memory inside the container
docker exec my-container ps aux --sort=-%mem
```
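When scripting this check, "memory continuously increases" can be made precise: flag a leak only when (nearly) every recent sample rises. A rough heuristic sketch (the function name, window size, and tolerance are our arbitrary choices):

```python
def looks_like_leak(samples_mb, window=5, tolerance=0.0):
    """Heuristic: memory that rises on every one of the last `window`
    samples suggests a leak rather than a steady-state working set."""
    recent = samples_mb[-window:]
    if len(recent) < window:
        return False  # not enough data yet
    rises = sum(1 for a, b in zip(recent, recent[1:]) if b > a + tolerance)
    return rises == window - 1  # strictly monotonic over the window

# Steadily climbing usage triggers the heuristic;
# usage that oscillates around a plateau does not.
```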

Step 6: Fix Application Memory Issues

```bash
# Java - limit the JVM heap via environment
docker run -e JAVA_OPTS="-Xmx2g -Xms1g" my-java-app
# (assumes the image's entrypoint passes $JAVA_OPTS to the java command)

# In the application:
# - Use memory-efficient data structures
# - Avoid retaining large objects
# - Implement proper cleanup
```

```bash
# Node.js - set the V8 heap limit
docker run -e NODE_OPTIONS="--max-old-space-size=2048" my-node-app

# In the application:
# - Clear caches periodically
# - Use streaming instead of buffering
# - Avoid memory leaks in event listeners
```

```python
# Python - use efficient data handling:
# - Use generators instead of lists
# - Delete large variables explicitly
# - Use weak references for caches

# Set a memory (address space) limit in code
import resource
limit = 2 * 1024 * 1024 * 1024  # 2 GB
resource.setrlimit(resource.RLIMIT_AS, (limit, limit))
```

Step 7: Configure Swap Space

```bash
# Add swap on the host (reduces OOM kills)
# Create a swap file
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make the swap persistent
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Check swap is enabled
swapon --show

# Docker's --memory-swap includes swap:
# --memory="2g" --memory-swap="4g" allows 2GB RAM + 2GB swap

# Disable swap for a container (for predictable performance)
docker run --memory="2g" --memory-swap="2g" my-image
# memory-swap == memory means no swap
```
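The `--memory` / `--memory-swap` arithmetic is worth spelling out: swap allowed = memory-swap minus memory, with -1 meaning unlimited swap. A tiny sketch (function name is ours):

```python
def swap_allowance(memory, memory_swap):
    """How much swap a container may use, in bytes.
    --memory-swap is the total of RAM + swap; -1 means unlimited
    swap; equal values mean no swap at all."""
    if memory_swap == -1:
        return float("inf")
    return max(memory_swap - memory, 0)

GB = 1024**3
# --memory=2g --memory-swap=4g -> 2 GB of swap on top of 2 GB RAM
# --memory=2g --memory-swap=2g -> no swap
```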

Step 8: Use Health Checks

In the Dockerfile:

```dockerfile
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1
```

In docker-compose.yml:

```yaml
services:
  my-app:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 5s
```

A health check endpoint can monitor memory and report unhealthy before the OOM killer strikes. Note that plain Docker only marks a container unhealthy; Swarm and Kubernetes will actually restart it. Example endpoint (Flask assumed):

```python
import psutil
from flask import Flask

app = Flask(__name__)

@app.route('/health')
def health():
    # psutil.virtual_memory() reports host-wide memory; inside a
    # container you may want to read the cgroup files instead
    memory_percent = psutil.virtual_memory().percent
    if memory_percent > 90:
        return 'unhealthy', 503
    return 'healthy', 200
```

Step 9: Monitor Container Memory

```bash
# Create a monitoring script
cat << 'EOF' > /usr/local/bin/check_docker_memory.sh
#!/bin/bash
for container in $(docker ps --format '{{.Names}}'); do
    MEM=$(docker stats "$container" --no-stream --format '{{.MemUsage}}')
    LIMIT=$(docker inspect "$container" --format '{{.HostConfig.Memory}}')

    if [ "$LIMIT" != "0" ]; then
        # MemPerc is a decimal like 93.52%, so strip the fraction
        # before the integer comparison
        PERCENT=$(docker stats "$container" --no-stream --format '{{.MemPerc}}' | tr -d '%' | cut -d. -f1)
        if [ "$PERCENT" -gt 90 ]; then
            echo "ALERT: $container memory at $MEM ($PERCENT%)"
        fi
    fi
done
EOF

chmod +x /usr/local/bin/check_docker_memory.sh

# Add to cron
echo "*/5 * * * * root /usr/local/bin/check_docker_memory.sh" > /etc/cron.d/docker-memory

# For longer-term monitoring, use Prometheus + Grafana:
# cAdvisor exports per-container Docker metrics; alert on high memory usage
```
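One pitfall in the shell script above is that `{{.MemPerc}}` is a decimal string like `93.52%`, which breaks naive integer comparison. In a higher-level language the check is straightforward; a hypothetical helper:

```python
def over_threshold(mem_perc, threshold=90.0):
    """Parse docker stats' MemPerc column (e.g. '93.52%') and flag
    values above the threshold, handling decimals correctly."""
    value = float(mem_perc.strip().rstrip("%"))
    return value > threshold
```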

Step 10: Handle OOM Gracefully

```bash
# Use a restart policy to recover from OOM kills
docker run --restart=on-failure:5 my-image
# Restart on failure, max 5 attempts

# Or always restart
docker run --restart=always my-image
```

In docker-compose (note that `restart_policy` is only honored under `deploy:`):

```yaml
services:
  my-app:
    restart: on-failure
    deploy:
      restart_policy:
        condition: on-failure
        max_attempts: 5
```

You can also handle memory pressure at the application level (check usage periodically and free resources before hitting the limit), or rely on an external process manager - systemd, supervisor, or Kubernetes, which restarts pods on OOM.
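The on-failure restart policy can also be mimicked by a process manager. A toy Python supervisor illustrating the idea (the function name and semantics are our sketch, not how Docker implements it):

```python
import subprocess
import sys
import time

def run_with_restarts(cmd, max_attempts=5, delay=0.0):
    """Re-run cmd until it exits 0 or max_attempts is reached,
    roughly like docker run --restart=on-failure:5."""
    rc = None
    for attempt in range(1, max_attempts + 1):
        rc = subprocess.run(cmd).returncode
        if rc == 0:
            return attempt, rc  # succeeded on this attempt
        time.sleep(delay)  # a real supervisor would back off here
    return max_attempts, rc  # gave up with the last exit code

# A child that always fails exhausts all attempts:
attempts, rc = run_with_restarts(
    [sys.executable, "-c", "import sys; sys.exit(1)"], max_attempts=3
)
```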

Exit Code Reference

| Exit Code | Signal  | Meaning                 |
|-----------|---------|-------------------------|
| 137       | SIGKILL | Killed (OOM or manual)  |
| 143       | SIGTERM | Graceful termination    |
| 139       | SIGSEGV | Segmentation fault      |
| 129       | SIGHUP  | Hangup/Reload           |
| 1         | -       | Application error       |
| 0         | -       | Normal exit             |

Verify the Fix

```bash
# After increasing the memory limit or fixing the leak

# 1. Run the container with the new limits
docker run --name my-container --memory="4g" my-image

# 2. Monitor memory usage
docker stats my-container --no-stream
# Should stay below the limit

# 3. Check the container is running
docker ps

# 4. Verify no OOM kills
docker inspect my-container --format '{{.State.OOMKilled}}'
# Should be false (or the container is still running)

# 5. Check host memory
free -h
# Should have available memory

# 6. Check logs for memory errors
docker logs my-container
# No OOM-related errors

# 7. Monitor over time
watch -n 10 'docker stats my-container --no-stream'
# Memory usage should stay stable
```

  • [Fix Docker Container Exit Code 1](/articles/fix-docker-container-exit-code-1)
  • [Fix Docker Memory Limit Exceeded](/articles/fix-docker-memory-limit-exceeded)
  • [Fix Docker Container Crash Loop](/articles/fix-docker-container-crash-loop)