Your pod status shows CrashLoopBackOff and the restart count keeps climbing. The container starts, crashes, Kubernetes restarts it, and the cycle repeats. CrashLoopBackOff is one of the most frustrating Kubernetes errors because it means your application is failing repeatedly, and finding the root cause requires digging into application logs and configuration.

Understanding CrashLoopBackOff

When a container exits with a non-zero status or is killed by the system, Kubernetes restarts it according to the pod's restart policy. After repeated failures, Kubernetes increases the delay between restarts (the backoff), starting at 10 seconds and doubling up to a cap of five minutes, to avoid wasting resources on a container that keeps dying. The pod status cycles through Running, Error, and CrashLoopBackOff as failures accumulate.
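The backoff schedule above can be sketched in a few lines. This is a simplified model of the kubelet's behavior (10-second base delay, doubling per failure, five-minute cap); the real kubelet also resets the backoff after a container runs successfully for 10 minutes.

```python
# Sketch of the kubelet's crash restart backoff schedule (simplified model:
# 10s base delay, doubling per failure, capped at 300s / 5 minutes).
def backoff_delays(failures: int, base: int = 10, cap: int = 300) -> list[int]:
    """Return the delay in seconds before each of the first `failures` restarts."""
    return [min(base * 2 ** i, cap) for i in range(failures)]

print(backoff_delays(6))  # → [10, 20, 40, 80, 160, 300]
```

This is why a pod that has been crashing for a while sits in CrashLoopBackOff for minutes between attempts rather than restarting immediately.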

Diagnosis Commands

Check pod status and restart count:

```bash
# Get pod status
kubectl get pods -n namespace

# Describe the pod
kubectl describe pod pod-name -n namespace

# Check restart count
kubectl get pod pod-name -n namespace -o jsonpath='{.status.containerStatuses[0].restartCount}'
```

Get container logs:

```bash
# Current container logs
kubectl logs pod-name -n namespace

# Previous container logs (from the crashed instance)
kubectl logs pod-name -n namespace --previous

# All containers in the pod
kubectl logs pod-name -n namespace --all-containers

# Stream logs
kubectl logs pod-name -n namespace -f
```

Check events:

```bash
# Pod events
kubectl get events -n namespace --field-selector involvedObject.name=pod-name

# All recent events
kubectl get events -n namespace --sort-by='.lastTimestamp'
```

Common Solutions

Solution 1: Fix Application Configuration Errors

Application misconfiguration is the most common cause:

```bash
# Check previous logs for config errors
kubectl logs pod-name -n namespace --previous
```

Common configuration issues:

```yaml
# Wrong environment variables
spec:
  containers:
  - name: app
    env:
    - name: DATABASE_URL
      value: "postgres://wrong-host:5432/db"  # Wrong hostname

# Missing required config
env:
- name: REQUIRED_VAR
  value: ""  # Empty value causes crash

# Fix: read the value from a ConfigMap
env:
- name: DATABASE_URL
  valueFrom:
    configMapKeyRef:
      name: app-config
      key: database-url
```

Check ConfigMaps and Secrets:

```bash
# Verify config exists
kubectl get configmap app-config -n namespace -o yaml
kubectl get secret app-secret -n namespace -o yaml
```

Solution 2: Fix Missing Dependencies

Application may depend on unavailable services:

```bash
# Check dependent services
kubectl get services -n namespace

# Test connectivity from a debug pod
kubectl run debug --image=busybox --rm -it --restart=Never -- nslookup database-service

# Check if service endpoints exist
kubectl get endpoints database-service -n namespace
```

Add init containers for dependency checks:

```yaml
spec:
  initContainers:
  - name: wait-for-db
    image: busybox
    command: ['sh', '-c', 'until nslookup database-service; do echo waiting; sleep 2; done']
  containers:
  - name: app
    image: myapp:v1
```

Solution 3: Fix Resource Limits

Container may be killed due to resource limits:

```bash
# Check if the container was OOMKilled
kubectl describe pod pod-name -n namespace | grep -i OOMKilled

# Check last state
kubectl get pod pod-name -n namespace -o jsonpath='{.status.containerStatuses[0].lastState}'
```

Fix memory limits:

```yaml
# Container exceeding memory limit
resources:
  limits:
    memory: "128Mi"  # Too low, causing OOMKilled

# Increase limits based on actual usage
resources:
  requests:
    memory: "256Mi"
  limits:
    memory: "512Mi"
```

Check actual memory usage:

```bash
# Get resource usage
kubectl top pod pod-name -n namespace

# Memory metrics across the namespace
kubectl top pods -n namespace --sort-by=memory
```

Solution 4: Fix Application Startup Issues

Application may crash during initialization:

```bash
# Check for startup errors
kubectl logs pod-name -n namespace --previous | head -50
```

Common startup issues and fixes:

```yaml
# Add a startup probe so slow startup doesn't trigger premature restarts
spec:
  containers:
  - name: app
    startupProbe:
      httpGet:
        path: /health
        port: 8080
      failureThreshold: 30
      periodSeconds: 10

# Or adjust the liveness probe delay
    livenessProbe:
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 60  # Give the app time to start
      periodSeconds: 10
```

Solution 5: Fix Command and Entrypoint Issues

Wrong container command causes immediate crash:

```bash
# Check what command is running
kubectl get pod pod-name -n namespace -o jsonpath='{.spec.containers[0].command}'
kubectl get pod pod-name -n namespace -o jsonpath='{.spec.containers[0].args}'
```

Fix command in pod spec:

```yaml
# If the container needs a different command
spec:
  containers:
  - name: app
    image: myapp:v1
    command: ["/app/start.sh"]  # Override the image default
    args: ["--config", "/etc/app/config.yaml"]

# Or keep the container running for debugging
command: ["/bin/sh", "-c", "sleep 3600"]
```

Solution 6: Fix File Permission Issues

Application cannot access required files:

```bash
# Check for permission errors in logs
kubectl logs pod-name -n namespace --previous | grep -i "permission denied"
```

Fix security context:

```yaml
spec:
  containers:
  - name: app
    securityContext:
      runAsUser: 1000
      runAsGroup: 1000
      fsGroup: 1000
```

Or fix volume mount permissions:

```yaml
volumeMounts:
- name: data
  mountPath: /data
  subPath: appdata  # Use subPath if needed
```

Solution 7: Fix Volume Mount Issues

Missing or incorrect volume mounts:

```bash
# Check volume mount errors
kubectl describe pod pod-name -n namespace | grep -i "mount"
kubectl logs pod-name -n namespace --previous | grep -i "no such file"
```

Fix volume configuration:

```yaml
spec:
  volumes:
  - name: config
    configMap:
      name: app-config
  - name: data
    persistentVolumeClaim:
      claimName: app-pvc
  containers:
  - name: app
    volumeMounts:
    - name: config
      mountPath: /etc/app/config
      readOnly: true
    - name: data
      mountPath: /data
```

Solution 8: Debug with Ephemeral Container

For complex debugging, use ephemeral containers:

```bash
# Add a debug container to the running pod
kubectl debug pod-name -n namespace --image=busybox --target=app-container

# Or copy the pod and modify it for debugging
kubectl debug pod-name -n namespace --copy-to=debug-pod --image=busybox --share-processes

# Attach to the debug container
kubectl attach debug-pod -c debugger -n namespace -it
```

Verification

After fixing the issue:

```bash
# Watch the pod restart
kubectl get pod pod-name -n namespace -w

# Confirm the restart count has stopped increasing
kubectl get pod pod-name -n namespace -o jsonpath='{.status.containerStatuses[0].restartCount}'

# Verify the application works
kubectl logs pod-name -n namespace -f
kubectl exec -it pod-name -n namespace -- curl localhost:8080/health
```

Common CrashLoopBackOff Causes

| Cause | Symptoms | Solution |
|-------|----------|----------|
| Config error | Error in logs about config | Fix environment variables |
| Missing dependency | Connection refused in logs | Add init container checks |
| OOMKilled | `lastState.reason: OOMKilled` | Increase memory limits |
| Liveness probe too early | App killed before ready | Increase `initialDelaySeconds` |
| Permission denied | File access errors | Fix `securityContext` |
| Missing volume | No such file errors | Check `volumeMounts` |

Prevention Best Practices

- Set appropriate resource limits based on actual usage.
- Use startup probes for slow-starting applications.
- Configure meaningful liveness/readiness probes.
- Include proper error handling in application code.
- Use init containers to verify dependencies.
- Test configuration thoroughly before deployment.
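The practices above can be combined in a single pod spec. The fragment below is an illustrative sketch that reuses the names from this guide (`database-service`, `myapp:v1`, `/health`); the resource sizes and probe thresholds are placeholder values you should tune to your workload.

```yaml
# Illustrative fragment combining the prevention practices above
# (names, sizes, and thresholds are placeholders, not production values)
spec:
  initContainers:
  - name: wait-for-db            # verify dependencies before the app starts
    image: busybox
    command: ['sh', '-c', 'until nslookup database-service; do sleep 2; done']
  containers:
  - name: app
    image: myapp:v1
    resources:                   # limits based on observed usage
      requests:
        memory: "256Mi"
      limits:
        memory: "512Mi"
    startupProbe:                # tolerate slow startup
      httpGet:
        path: /health
        port: 8080
      failureThreshold: 30
      periodSeconds: 10
    livenessProbe:               # only runs after the startup probe succeeds
      httpGet:
        path: /health
        port: 8080
      periodSeconds: 10
```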

CrashLoopBackOff errors require examining the crashed container's logs and state. The `--previous` flag on `kubectl logs` is essential for seeing why the container failed before Kubernetes restarted it.