Introduction
Docker exit code 137 (128 + 9, i.e. SIGKILL) indicates a container's main process was killed, most commonly by the Linux kernel's OOM (Out of Memory) killer. This happens when a container exceeds its memory limit or the host runs out of memory, causing the kernel to terminate the process.
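The 128 + N convention means any exit code above 128 encodes the number of the fatal signal. A quick illustrative helper in Python (not part of Docker itself) decodes it:

```python
import signal

def decode_exit_code(code: int) -> str:
    """Decode a container exit code using the 128 + signal-number convention."""
    if code > 128:
        # e.g. 137 - 128 = 9 -> SIGKILL, 139 - 128 = 11 -> SIGSEGV
        return f"killed by {signal.Signals(code - 128).name}"
    return f"exited normally with status {code}"

print(decode_exit_code(137))  # -> killed by SIGKILL
```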
Symptoms
- Container stops with exit code 137
- `docker ps -a` shows container status as `Exited (137)`
- Application logs show abrupt termination without a shutdown sequence
- Kernel logs show `Memory cgroup out of memory`
- Container was running memory-intensive operations
- `docker inspect` shows `"OOMKilled": true`
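These symptoms can also be checked programmatically. A small sketch, assuming the Docker CLI is on the PATH (the helper names are illustrative, not a Docker API):

```python
import json
import subprocess

def oom_killed_from_inspect(inspect_json: str) -> bool:
    """Parse `docker inspect` output (a JSON array) and report the OOMKilled flag."""
    return bool(json.loads(inspect_json)[0]["State"]["OOMKilled"])

def was_oom_killed(container: str) -> bool:
    """Ask the Docker CLI whether a container was OOM-killed."""
    out = subprocess.run(
        ["docker", "inspect", container],
        capture_output=True, text=True, check=True,
    ).stdout
    return oom_killed_from_inspect(out)
```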
Common Causes
- Container memory limit set too low for application requirements
- Memory leak in application code
- No swap space configured for bursty workloads
- Java JVM heap size exceeding container limits
- Multiple containers competing for host memory
- Large file uploads or data processing in memory
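The memory-leak cause is easy to reproduce. A minimal sketch of a hypothetical in-process cache with no eviction policy, with `tracemalloc` confirming that retained memory grows with every request:

```python
import tracemalloc

cache = []  # hypothetical request cache that is never evicted

def handle_request():
    # Bug: each request retains ~1 KiB forever, so the container's
    # resident memory climbs until the OOM killer intervenes.
    cache.append(bytearray(1024))

tracemalloc.start()
for _ in range(1000):
    handle_request()
retained, peak = tracemalloc.get_traced_memory()
print(f"retained after 1000 requests: ~{retained // 1024} KiB")
```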
Step-by-Step Fix
1. Check if the container was OOM killed:

   ```bash
   docker inspect <container_id> --format='{{.State.OOMKilled}}'
   # Returns: true
   ```

2. View container memory statistics:

   ```bash
   docker stats --no-stream <container_id>
   # Check the MEM USAGE / LIMIT column
   ```

3. Increase the container memory limit:

   ```bash
   docker run -m 2g --memory-swap 2g myimage
   ```
Or in `docker-compose.yml`:

```yaml
services:
  app:
    image: myapp
    deploy:
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 512M
```
4. Configure Java applications properly (if using Java):

   ```bash
   # Use container-aware JVM options
   docker run -e JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0" myjavaapp
   ```

5. Add swap space for bursty workloads:

   ```bash
   # Allow the container to use swap (not recommended for production databases)
   docker run -m 1g --memory-swap 2g myimage
   ```

6. Monitor memory usage patterns:

   ```bash
   # Install cAdvisor for monitoring
   docker run -d --name=cadvisor \
     -v /:/rootfs:ro \
     -v /var/run:/var/run:ro \
     -v /sys:/sys:ro \
     -v /var/lib/docker/:/var/lib/docker:ro \
     -p 8080:8080 \
     gcr.io/cadvisor/cadvisor:latest
   ```

7. Check for memory leaks:

   ```bash
   # If using Node.js, attach the inspector to profile heap usage
   docker exec <container> node --inspect
   # Or raise the heap ceiling in the application:
   # node --max-old-space-size=4096
   ```
8. Set up OOM notifications:

   ```bash
   # Monitor OOM events
   dmesg -T | grep -i 'killed process'
   journalctl -k | grep -i 'memory cgroup'
   ```

9. Implement graceful degradation in the application:

   ```python
   import resource

   def check_memory():
       soft, hard = resource.getrlimit(resource.RLIMIT_AS)
       if soft == resource.RLIM_INFINITY:
           return  # no address-space limit set
       # ru_maxrss is reported in KiB on Linux; RLIMIT_AS is in bytes
       usage = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * 1024
       if usage > soft * 0.9:
           # Free caches, reduce workers, shed load
           pass
   ```
10. Verify the fix by running load tests:

    ```bash
    # Stress test the container
    docker run --rm -m 512m --memory-swap 512m myimage stress --vm 1 --vm-bytes 400M --timeout 60s
    ```
Prevention
- Always set memory limits based on load testing
- Monitor memory usage trends with Prometheus/Grafana
- Use horizontal pod autoscaling in Kubernetes
- Configure appropriate JVM/Python/Node.js memory settings
- Set up alerts for memory usage >80%
- Regular memory profiling during development
- Use memory-efficient algorithms for large data processing
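The >80% alert rule above can be sketched as a small in-container check. The cgroup v2 file names (`memory.current`, `memory.max`) are standard; the helper names and threshold handling are illustrative:

```python
from pathlib import Path

ALERT_THRESHOLD = 0.80  # alert when usage crosses 80% of the limit

def over_threshold(usage_bytes, limit_bytes, threshold=ALERT_THRESHOLD):
    """Return True when memory usage crosses the alert threshold."""
    return usage_bytes >= limit_bytes * threshold

def cgroup_memory(cgroup_root="/sys/fs/cgroup"):
    """Read (usage, limit) in bytes from cgroup v2, as seen inside a container.

    Returns None when the files are missing or no limit is set.
    """
    try:
        usage = int(Path(cgroup_root, "memory.current").read_text())
        limit_raw = Path(cgroup_root, "memory.max").read_text().strip()
    except OSError:
        return None
    if limit_raw == "max":
        return None  # no memory limit configured
    return usage, int(limit_raw)
```

Inside a container this would run on a timer and fire an alert (or start shedding load) when `over_threshold` returns True.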