## Introduction
Docker exit code 137 means the container process was killed by signal 9 (SIGKILL), which typically indicates an Out Of Memory (OOM) kill. The exit code is calculated as 128 + 9 = 137. This happens when the container exceeds its memory limit and the Linux OOM killer terminates the process to protect the host system.
Unlike application crashes (exit code 1) or graceful shutdowns (exit code 0), exit code 137 is involuntary—the process is killed by the kernel, not by the application itself. The fix requires understanding memory consumption patterns and adjusting limits or optimizing the application.
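The "128 + signal" convention can be verified directly; a minimal Python sketch using the standard library's `signal` module:

```python
import signal

# Exit codes above 128 encode the fatal signal: code = 128 + signum.
# For exit code 137, the signal is 137 - 128 = 9, i.e. SIGKILL.
signum = 137 - 128
print(signal.Signals(signum).name)  # SIGKILL
```

The same arithmetic explains other common codes, e.g. 143 = 128 + 15 (SIGTERM) for a graceful stop request.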
## Symptoms
- `docker ps` shows the container with `STATUS: Exited (137)`
- Container starts successfully but crashes after seconds, minutes, or hours of running
- `docker inspect` shows `"OOMKilled": true` in the `State` section
- Application logs may show sudden termination without error messages
- Memory usage graphs show container approaching or hitting limits before crash
- Other containers on the same host may also experience memory pressure
## Common Causes
- **Memory limit too low**: Container memory limit is below application requirements
- **Memory leak**: Application gradually consumes more memory until limit is hit
- **Traffic spike**: Sudden increase in requests causes memory usage to spike
- **Large data processing**: Processing big files, datasets, or images in memory
- **JVM heap misconfiguration**: Java applications not tuned for container memory limits
- **Node.js memory limit**: V8 heap not configured for container constraints
- **Database query results**: Loading too many rows into memory at once
- **Cache unbounded**: In-memory cache grows without eviction policy
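Several of these causes share one signature: memory that only grows. As a minimal illustration (the `cache` dict here is a hypothetical stand-in for an application-level cache):

```python
import sys

# An unbounded dict-as-cache never shrinks: every distinct key adds to the
# container's resident memory until the limit is hit.
cache = {}
size_empty = sys.getsizeof(cache)

for i in range(100_000):
    cache[i] = "x" * 100  # simulate cached payloads

size_full = sys.getsizeof(cache)  # dict overhead alone, excluding the values
print(size_full > size_empty)     # True: the structure only grows
```

In a real service the growth is slower, which is why this class of bug often surfaces hours or days after deployment.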
## Step-by-Step Fix
### 1. Confirm OOM kill
Check if the container was actually OOM killed:
```bash
# Check container state
docker inspect <container-id> --format='{{.State.OOMKilled}}'
# Returns: true if OOM killed

# Check exit code
docker inspect <container-id> --format='{{.State.ExitCode}}'
# Returns: 137

# Check container logs (may be empty if killed mid-operation)
docker logs <container-id> --tail 50
```
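If you collect these fields programmatically, the check reduces to two values in the `State` object. A Python sketch, parsing a hypothetical excerpt of `docker inspect` JSON output:

```python
import json

# Hypothetical excerpt of `docker inspect <container-id>` output
inspect_output = '[{"State": {"Status": "exited", "ExitCode": 137, "OOMKilled": true}}]'

state = json.loads(inspect_output)[0]["State"]
was_oom_killed = state["OOMKilled"] and state["ExitCode"] == 137
print(was_oom_killed)  # True
```

The same two fields are what a health-check script or CI job would grep for before deciding to restart or page.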
### 2. Check current memory limits
```bash
# View container memory limit
docker inspect <container-id> --format='{{.HostConfig.Memory}}'
# Returns bytes (0 means no limit)

# View container stats (if still running)
docker stats <container-id>

# Check host memory and Docker disk usage
free -h
docker system df
```
### 3. Check system OOM killer logs
The kernel logs show which process was killed and why:
```bash
# Check dmesg for OOM messages
dmesg -T | grep -i "oom"
dmesg -T | grep -i "killed"

# Check systemd journal
journalctl -k --grep="oom" --since "1 hour ago"

# Check /var/log/messages (RHEL/CentOS)
grep -i "oom" /var/log/messages
```
Typical OOM log entry:
```
Out of memory: Kill process 12345 (java) score 500 or sacrifice child
Killed process 12345 (java) total-vm:2048000kB, anon-rss:1536000kB
```
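The `anon-rss` figure in that entry is the resident memory the process held when it was killed. Extracting the useful fields with a regex (the sample line is copied from the log excerpt above):

```python
import re

log_line = ("Killed process 12345 (java) "
            "total-vm:2048000kB, anon-rss:1536000kB")

# Pull out the PID, process name, and resident set size at kill time
match = re.search(r"Killed process (\d+) \((\w+)\).*anon-rss:(\d+)kB", log_line)
pid, name, rss_kb = match.group(1), match.group(2), int(match.group(3))

print(f"{name} (pid {pid}) held ~{rss_kb // 1024} MiB when killed")
# java (pid 12345) held ~1500 MiB when killed
```

Comparing that RSS against the container's configured limit tells you whether the limit was genuinely hit or the host itself ran out of memory.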
### 4. Increase container memory limit
If the limit is simply too low, increase it:
```bash
# Run with higher memory limit
docker run -d --memory=2g --memory-swap=2g <image>

# Update existing container (stop, remove, recreate)
docker stop <container>
docker rm <container>
docker run -d --memory=2g --name <container> <image>
```

In `docker-compose.yml`:

```yaml
services:
  app:
    image: myapp
    deploy:
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 1G
```
**Memory limit guidelines:**
- Java (Spring Boot): 2-4GB typical
- Node.js: 512MB-2GB depending on workload
- Python: 256MB-1GB for most apps
- Database (PostgreSQL, MySQL): 1-8GB depending on dataset
### 5. Configure JVM for containers
Java applications need special tuning for container memory:
```bash
# Use container-aware JVM flags (Java 10+, backported to 8u191+)
docker run -d \
  -e JAVA_TOOL_OPTIONS="-XX:MaxRAMPercentage=75.0" \
  --memory=2g \
  <java-image>

# For Java 8 before 8u191, set the heap explicitly via the environment
docker run -d \
  -e JAVA_TOOL_OPTIONS="-Xmx1536m -Xms512m" \
  --memory=2g \
  <java-image>
```
**Key JVM memory flags:**
- `-XX:MaxRAMPercentage`: Max heap as % of container memory (default 25%)
- `-XX:InitialRAMPercentage`: Initial heap size as % of container memory
- `-XX:+UseContainerSupport`: Enable container awareness (available since Java 8u191 and 10; on by default)
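The arithmetic behind `MaxRAMPercentage` is straightforward; for the `--memory=2g` example above:

```python
# Heap cap implied by -XX:MaxRAMPercentage against the container limit
container_limit_mib = 2048   # --memory=2g
max_ram_percentage = 75.0    # -XX:MaxRAMPercentage=75.0

heap_cap_mib = container_limit_mib * max_ram_percentage / 100
print(int(heap_cap_mib))  # 1536
```

The remaining ~25% is deliberate headroom: the JVM also needs memory for metaspace, thread stacks, and native allocations, all of which count toward the container limit.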
### 6. Configure Node.js memory limit
Node.js V8 heap needs explicit configuration:
```bash
# Set max old space size (in MB)
docker run -d \
  -e NODE_OPTIONS="--max-old-space-size=1536" \
  --memory=2g \
  <node-image>
```

Or in application code, before any significant allocation (less reliable than setting the flag at startup):

```js
const v8 = require('v8');
v8.setFlagsFromString('--max-old-space-size=1536');
```
Rule of thumb: set `--max-old-space-size` to ~75% of the container memory limit, leaving headroom for off-heap memory such as buffers and native modules.
### 7. Profile application memory usage
Find memory leaks or inefficient patterns:
**Java:**
```bash
# Generate heap dump
jmap -dump:format=b,file=heap.hprof <pid>

# Analyze with VisualVM or Eclipse MAT
# Check for: large collections, unclosed resources, cache growth
```
**Node.js:**
```bash
# Generate heap snapshot
node --inspect --expose-gc app.js
# Connect Chrome DevTools and take a heap snapshot

# Use clinic.js for profiling
npm install -g clinic
clinic doctor -- node app.js
```
**Python:**
```bash
# Use memory_profiler
pip install memory_profiler
python -m memory_profiler app.py
```

```python
# Use tracemalloc in code
import tracemalloc

tracemalloc.start()
# ... run code ...
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics('lineno')[:10]:
    print(stat)
```
### 8. Add memory monitoring
Set up alerts before OOM occurs:
```bash
# Monitor container memory in real-time
docker stats --no-stream <container>

# Check memory usage from inside the container (cgroup v1)
docker exec <container> cat /sys/fs/cgroup/memory/memory.usage_in_bytes
# cgroup v2
docker exec <container> cat /sys/fs/cgroup/memory.current

# Prometheus metrics (with cAdvisor)
# Query: container_memory_usage_bytes{name="container_name"}
```
Set alerts at 80% and 90% of memory limit.
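The 80%/90% thresholds are easy to encode in whatever alerting glue you use; a minimal Python sketch (the function name and level labels are illustrative, not from any particular monitoring library):

```python
def memory_alert_level(usage_bytes, limit_bytes):
    """Classify container memory usage against its configured limit."""
    ratio = usage_bytes / limit_bytes
    if ratio >= 0.90:
        return "critical"
    if ratio >= 0.80:
        return "warning"
    return "ok"

limit = 2 * 1024**3  # --memory=2g
print(memory_alert_level(int(limit * 0.50), limit))  # ok
print(memory_alert_level(int(limit * 0.85), limit))  # warning
print(memory_alert_level(int(limit * 0.95), limit))  # critical
```

Feeding it the cgroup usage file and the `docker inspect` limit gives an early warning well before the kernel steps in.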
### 9. Optimize application memory patterns
Common fixes for specific scenarios:
**Large file processing:**
```python
# WRONG: Load entire file
with open('large.csv') as f:
    data = f.read()  # All in memory!

# CORRECT: Stream line by line
with open('large.csv') as f:
    for line in f:
        process(line)
```
**Database queries:**
```python
# WRONG: Fetch all rows
rows = db.query("SELECT * FROM large_table").all()

# CORRECT: Use a cursor/iterator (e.g. SQLAlchemy's yield_per)
rows = db.query("SELECT * FROM large_table").yield_per(1000)
for row in rows:
    process(row)
```
**Caching:**
```python
# WRONG: Unbounded cache
cache = {}
cache[key] = value  # Grows forever

# CORRECT: LRU cache with max size
from functools import lru_cache

@lru_cache(maxsize=1000)
def cached_func(key):
    return compute(key)
```
## Prevention Checklist
- [ ] Set memory limits based on load testing, not guesses
- [ ] Configure JVM/Node.js heap limits for container constraints
- [ ] Implement streaming for large data processing
- [ ] Use bounded caches with eviction policies
- [ ] Add memory usage monitoring and alerting
- [ ] Profile memory usage before production deployment
- [ ] Set up OOM kill alerts in monitoring system
- [ ] Document memory requirements in deployment guides
## Related Issues
- [Fix Docker Container OOMKilled](/articles/fix-docker-container-oomkilled)
- [Fix Docker Container Restarting](/articles/fix-docker-container-restarting)
- [Fix Docker Out of Disk Space](/articles/fix-docker-out-of-disk-space)
- [Fix Kubernetes OOMKilled](/articles/fix-kubernetes-oomkilled)