Introduction

Docker exit code 137 (128 + 9, the signal number of SIGKILL) means a container's main process was forcibly killed, most commonly by the Linux OOM (Out of Memory) killer. This happens when the container exceeds its memory limit or the host itself runs out of memory, and the kernel terminates the process to reclaim memory.
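
The 128 + signal arithmetic can be decoded mechanically for any exit code above 128. A small sketch (the function name is illustrative, not a Docker API):

```python
import signal

def decode_exit_code(code: int) -> str:
    """Map a Docker container exit code to a human-readable cause."""
    if code > 128:
        sig = code - 128  # Docker reports 128 + the terminating signal number
        name = signal.Signals(sig).name
        return f"killed by signal {sig} ({name})"
    return f"exited normally with status {code}"

print(decode_exit_code(137))  # killed by signal 9 (SIGKILL)
```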

Symptoms

  • Container stops with exit code 137
  • docker ps -a shows container status as Exited (137)
  • Application logs show abrupt termination without shutdown sequence
  • Kernel logs show Memory cgroup out of memory
  • Container was running memory-intensive operations
  • docker inspect shows "OOMKilled": true
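
The `"OOMKilled"` flag lives in the container's inspect JSON, so if you post-process `docker inspect` output in a script, the check can be sketched like this (the sample JSON fragment below is illustrative, trimmed to the relevant field):

```python
import json

def was_oom_killed(inspect_output: str) -> bool:
    """Return True if the first container in `docker inspect` JSON was OOM killed."""
    data = json.loads(inspect_output)
    # `docker inspect` returns a JSON array, one object per container
    return bool(data[0].get("State", {}).get("OOMKilled", False))

# Illustrative fragment of `docker inspect <container_id>` output
sample = '[{"State": {"Status": "exited", "ExitCode": 137, "OOMKilled": true}}]'
print(was_oom_killed(sample))  # True
```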

Common Causes

  • Container memory limit set too low for application requirements
  • Memory leak in application code
  • No swap space configured for bursty workloads
  • Java JVM heap size exceeding container limits
  • Multiple containers competing for host memory
  • Large file uploads or data processing in memory
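
The JVM cause in particular is easy to reproduce on paper: the heap is only part of the JVM's footprint, so setting `-Xmx` equal to the container limit leaves no room for metaspace, thread stacks, and native buffers. A rough budget check (the 300 MB overhead figure is an assumption for illustration; real overhead varies by workload):

```python
def jvm_fits(container_limit_mb: int, max_heap_mb: int,
             non_heap_overhead_mb: int = 300) -> bool:
    """Rough check: heap plus assumed non-heap overhead must fit inside the limit."""
    return max_heap_mb + non_heap_overhead_mb <= container_limit_mb

print(jvm_fits(2048, 2048))  # False: -Xmx2g in a 2g container leaves no headroom
print(jvm_fits(2048, 1536))  # True: ~75% of the limit leaves room for overhead
```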

Step-by-Step Fix

  1. Check whether the container was OOM killed:

     ```bash
     docker inspect <container_id> --format='{{.State.OOMKilled}}'
     # Returns: true
     ```

  2. View container memory statistics:

     ```bash
     docker stats --no-stream <container_id>
     # Check the MEM USAGE / LIMIT column
     ```

  3. Increase the container memory limit:

     ```bash
     docker run -m 2g --memory-swap 2g myimage
     ```

Or in docker-compose.yml:

```yaml
services:
  app:
    image: myapp
    deploy:
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 512M
```

  4. Configure Java applications properly (if using Java):

     ```bash
     # Use container-aware JVM options
     docker run -e JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0" myjavaapp
     ```

  5. Add swap space for bursty workloads:

     ```bash
     # Allow the container to use swap (not recommended for production databases)
     docker run -m 1g --memory-swap 2g myimage
     ```

  6. Monitor memory usage patterns:

     ```bash
     # Run cAdvisor for monitoring
     docker run -d --name=cadvisor \
       -v /:/rootfs:ro \
       -v /var/run:/var/run:ro \
       -v /sys:/sys:ro \
       -v /var/lib/docker/:/var/lib/docker:ro \
       -p 8080:8080 \
       gcr.io/cadvisor/cadvisor:latest
     ```

  7. Check for memory leaks:

     ```bash
     # If using Node.js, profile heap usage with the inspector
     docker exec <container> node --inspect
     # Or raise the heap ceiling in the application:
     node --max-old-space-size=4096
     ```
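
For Python services, the same leak-hunting idea can be sketched with the standard library's tracemalloc module: take two snapshots and see which allocation sites grew between them (the bytearray list here deliberately simulates a retained allocation so the diff has something to show):

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

# Simulate ~1 MB of allocations that are never released
leak = [bytearray(1024) for _ in range(1000)]

after = tracemalloc.take_snapshot()
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)  # top allocation sites that grew since the first snapshot
```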

  8. Set up OOM notifications:

     ```bash
     # Monitor OOM events in the kernel log
     dmesg -T | grep -i 'killed process'
     journalctl -k | grep -i 'memory cgroup'
     ```

  9. Implement graceful degradation in the application:

     ```python
     import resource

     def check_memory():
         soft, hard = resource.getrlimit(resource.RLIMIT_AS)
         # ru_maxrss is reported in kilobytes on Linux; the limit is in bytes
         usage = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * 1024
         if soft != resource.RLIM_INFINITY and usage > soft * 0.9:
             # Free caches, reduce workers
             pass
     ```
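
The `dmesg` output grepped above can also be parsed programmatically when wiring up alerting. A minimal sketch against the usual kernel wording (the regex and the sample line are assumptions about the log format; verify against your kernel's actual output):

```python
import re

# Matches kernel lines like: "... Killed process 4242 (java) total-vm:4194304kB ..."
OOM_RE = re.compile(r"Killed process (\d+) \(([^)]+)\)")

def parse_oom_events(dmesg_text: str) -> list:
    """Return (pid, process_name) pairs for each OOM kill found in kernel log text."""
    return [(int(pid), name) for pid, name in OOM_RE.findall(dmesg_text)]

sample = "[Mon Jan  1 00:00:00] Memory cgroup out of memory: Killed process 4242 (java) total-vm:4194304kB"
print(parse_oom_events(sample))  # [(4242, 'java')]
```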

  10. Verify the fix by running load tests:

      ```bash
      # Stress-test the container
      docker run --rm -m 512m --memory-swap 512m myimage stress --vm 1 --vm-bytes 400M --timeout 60s
      ```
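
If the `stress` binary is not baked into your image, a comparable allocation loop can be sketched in Python (chunk size and target are arbitrary; run it inside the memory-limited container and raise the target past the limit to provoke the OOM killer deliberately):

```python
def allocate_mb(target_mb: int, chunk_mb: int = 10) -> list:
    """Hold roughly target_mb of memory in fixed-size chunks."""
    chunks = []
    for _ in range(target_mb // chunk_mb):
        chunks.append(bytearray(chunk_mb * 1024 * 1024))
    return chunks

held = allocate_mb(50)  # ~50 MB resident; past the container limit this exits 137
print(len(held))        # 5
```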

Prevention

  • Always set memory limits based on load testing
  • Monitor memory usage trends with Prometheus/Grafana
  • Use horizontal pod autoscaling in Kubernetes
  • Configure appropriate JVM/Python/Node.js memory settings
  • Set up alerts for memory usage >80%
  • Regular memory profiling during development
  • Use memory-efficient algorithms for large data processing
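
The >80% alert above can be computed directly from the cgroup v2 accounting files (`memory.current` and `memory.max` under `/sys/fs/cgroup` inside the container). A sketch that takes the file contents as strings, so it is easy to test; the paths and the "max" sentinel follow the common cgroup v2 layout, but verify them on your host:

```python
def memory_usage_ratio(current_text: str, max_text: str) -> float:
    """Ratio of memory.current to memory.max (cgroup v2); 0.0 when unlimited ("max")."""
    limit = max_text.strip()
    if limit == "max":
        return 0.0  # no limit set, nothing to alert on
    return int(current_text) / int(limit)

# Contents of /sys/fs/cgroup/memory.current and memory.max inside the container
ratio = memory_usage_ratio("1932735283", "2147483648")  # ~1.8 GiB used of a 2 GiB limit
print(ratio > 0.8)  # True: time to alert
```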