# Docker Container Exit Code 1: Immediate Exit Troubleshooting Guide
You've started a container, but it exits almost immediately with exit code 1. No logs, no clues—just a dead container. This is one of the most frustrating Docker issues because the container disappears before you can investigate.
Exit code 1 typically indicates a general application error. Unlike exit code 0 (success) or exit code 137 (killed by SIGKILL, commonly the OOM killer), code 1 means something went wrong inside your application or its startup process.
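The mechanism is simple: a container's exit code is just the exit status of its main process (PID 1). A quick Docker-free sketch:

```bash
# Simulate an application whose main process fails during startup.
sh -c 'exit 1'
echo "exit status: $?"   # prints: exit status: 1
```

With a real container, the same value is available via `docker inspect <container_name> --format '{{.State.ExitCode}}'`.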
## Common Causes
When a container exits with code 1, one of these scenarios is usually responsible:
- **Application crash during initialization** - your app throws an unhandled exception
- **Missing configuration files** - required config files aren't in the expected location
- **Environment variable issues** - required env vars are missing or malformed
- **Dependency failures** - database connections, external APIs, or services unavailable
- **File permission problems** - the container user can't read/execute necessary files
- **Invalid command or entrypoint** - the CMD or ENTRYPOINT is incorrect
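Several of these causes can be surfaced by failing fast in the entrypoint instead of crashing later. A minimal sketch of a startup check (the variable name and config path are examples, not taken from any particular image):

```bash
# Hypothetical fail-fast startup check: log *why* the container is
# about to exit 1 instead of dying silently mid-run.
check_startup() {
  [ -n "$DATABASE_URL" ] || { echo "startup error: DATABASE_URL is not set" >&2; return 1; }
  [ -r "${CONFIG_FILE:-/app/config.yml}" ] || { echo "startup error: config file missing or unreadable" >&2; return 1; }
}

# In a real entrypoint you would then hand off to the application:
# check_startup || exit 1
# exec /app/server "$@"
```

Because the check writes to stderr before exiting, `docker logs` will show the reason even when the container is already gone.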
## Diagnosing the Problem

### Check Container Status and Exit Code
First, verify the exit code and see how recently the container exited:
```bash
docker ps -a --filter "status=exited" --format "table {{.Names}}\t{{.Status}}\t{{.Image}}"
```

Look for containers that show "Exited (1)" in the status. The timestamp tells you if this is a recurring issue.
### View Container Logs
The most important step is checking what the container tried to output before dying:
```bash
docker logs <container_name>
```

If the container exits too fast, you might miss logs. Try:

```bash
docker logs --tail 100 <container_name>
docker logs --since 5m <container_name>
```

If there are no logs at all, the application might be silently crashing or writing to a different output stream.
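The empty-log case is often an app that writes to a log file inside the container instead of stdout/stderr. A Docker-free illustration of why `docker logs` then shows nothing (the error message and file path are invented):

```bash
# Simulate an app that logs its failure to a file, not stdout/stderr.
if sh -c 'echo "fatal: config not found" > /tmp/app.log; exit 1'; then
  echo "unexpectedly succeeded"
else
  echo "exit status: $?"   # 1, yet nothing reached stdout or stderr
fi
cat /tmp/app.log           # the failure reason lives only in the file
```

Inside a real container, you would hunt for such files with `docker exec` or an interactive shell, or reconfigure the app to log to stdout.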
### Inspect Container Configuration
See exactly how the container was configured:
```bash
docker inspect <container_name> --format '{{json .Config}}' | jq
```

This shows the entrypoint, command, environment variables, and volumes. Pay attention to:

- `Cmd` and `Entrypoint` - are they correct?
- `Env` - are required environment variables set?
- `WorkingDir` - is the working directory correct?
### Run Interactively for Debugging
When logs don't help, run the container interactively:
```bash
docker run -it --entrypoint /bin/sh <image_name>
```

For Alpine-based images:

```bash
docker run -it --entrypoint /bin/ash <image_name>
```

Once inside, you can:
- Check if files exist where expected
- Test running the application manually
- Verify environment variables with `env`
- Check file permissions with `ls -la`
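The checklist above can be pasted into the interactive shell as one block (the file paths below are examples; substitute your own):

```bash
# Quick in-container sanity check: flag missing files and show
# ownership/permissions for the ones that exist.
for f in /app/start.sh /app/config.yml; do   # example paths
  if [ -e "$f" ]; then
    ls -ld "$f"
  else
    echo "MISSING: $f"
  fi
done
env | sort | head -5   # spot-check the environment
```

Anything printed as `MISSING` is a strong candidate for the cause of the exit.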
## Common Fixes

### Fix 1: Missing Environment Variables
If your application requires specific environment variables:
```bash
docker run -d \
  -e DATABASE_URL=postgres://user:pass@host:5432/db \
  -e API_KEY=your-api-key \
  --name myapp \
  myimage:latest
```

For Docker Compose, ensure variables are defined in docker-compose.yml or an .env file:
```yaml
services:
  myapp:
    image: myimage:latest
    environment:
      - DATABASE_URL=${DATABASE_URL}
    env_file:
      - .env
```

### Fix 2: Incorrect Entrypoint or Command
Sometimes the Dockerfile specifies a wrong entrypoint. Override it:

```bash
docker run -d --entrypoint "" myimage:latest /app/start.sh
```

Or fix the Dockerfile directly:
```dockerfile
# Wrong
ENTRYPOINT ["./app"]
CMD ["--wrong-flag"]

# Correct
ENTRYPOINT ["./app"]
CMD ["--correct-flag", "--port", "8080"]
```
### Fix 3: Missing Configuration Files
If your app needs config files that aren't in the image:
```bash
docker run -d \
  -v /host/path/config.yml:/app/config.yml:ro \
  --name myapp \
  myimage:latest
```

Verify the mount is correct:

```bash
docker exec <container_name> ls -la /app/config.yml
```

### Fix 4: Dependency Connection Issues
If your app fails because it can't reach a database or service:
1. Ensure the dependency is running first.
2. Use proper networking in Docker Compose:
```yaml
services:
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

  app:
    image: myapp:latest
    depends_on:
      db:
        condition: service_healthy
```
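If you can't use Compose healthchecks (plain `docker run`, or an app that must handle its own retries), a retry loop in the entrypoint is a common alternative. A sketch with a hypothetical probe command:

```bash
# Retry a dependency probe before starting; return 1 if the dependency
# never comes up, so the container exits with a clear log line.
wait_for() {
  probe=$1 max=$2 i=0
  until eval "$probe"; do
    i=$((i + 1))
    if [ "$i" -ge "$max" ]; then
      echo "dependency still unreachable after $max attempts" >&2
      return 1
    fi
    sleep 1
  done
}

# Example entrypoint usage (the probe command is an assumption -- use
# whatever client your image ships, e.g. pg_isready or nc):
# wait_for "nc -z db 5432" 30 || exit 1
# exec /app/server "$@"
```

This turns a cryptic connection-refused crash into a single explicit log line before the exit.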
### Fix 5: File Permission Issues
If the container runs as a non-root user and can't access files:
```bash
# Check current user
docker run --rm myimage:latest whoami

# Fix permissions on host
chmod -R 755 ./config
chown -R 1000:1000 ./data
```
Or run as root temporarily for testing (not recommended for production):

```bash
docker run -d --user root myimage:latest
```

## Verification Steps
After applying your fix, verify the container stays running:
```bash
# Start the container
docker run -d --name test_container myimage:latest

# Wait a moment and check status
sleep 5
docker ps --filter "name=test_container"

# Check logs
docker logs test_container

# Verify uptime
docker inspect test_container --format '{{.State.StartedAt}}'
```
If the container is still running after a few minutes, your fix worked.
## Prevention Tips
To avoid exit code 1 issues in the future:
- Always include health checks in your Dockerfile
- Use proper signal handling in your application
- Set a reasonable restart policy, e.g. `--restart on-failure:3`
- Log to stdout/stderr instead of files so Docker can capture output
- Validate configuration at startup with meaningful error messages
- Use dependency checks in Docker Compose with `depends_on` and `healthcheck`
Exit code 1 is frustrating but almost always solvable with systematic debugging. Start with logs, then inspect configuration, and finally run interactively to isolate the root cause.