What's Actually Happening

Your Kubernetes pod has multiple containers designed to work together, but the sidecar container fails to start while the main container runs fine. This breaks the expected auxiliary functionality like logging, monitoring, or proxy services that the sidecar should provide.

The Error You'll See

Pod status shows sidecar stuck:

```bash
$ kubectl get pod myapp-pod -o wide

NAME        READY   STATUS             RESTARTS   AGE
myapp-pod   1/2     CrashLoopBackOff   5          10m
```

Container-specific status:

```bash
$ kubectl describe pod myapp-pod

Containers:
  main-container:
    State:          Running
    Ready:          True
  sidecar-container:
    State:          Waiting
      Reason:       CrashLoopBackOff
    Message:        container "sidecar" is waiting to start
    Exit Code:      137
```

Logs reveal the issue:

```bash
$ kubectl logs myapp-pod -c sidecar-container

standard_init_linux.go:190: exec user process caused "no such file or directory"
# OR
Error: cannot connect to shared volume at /var/log/app
# OR
OOMKilled
```

Why This Happens

  1. Missing shared volume - Sidecar can't access files from the main container
  2. Init container blocking - An init container is stuck waiting on a condition, so no regular containers start
  3. Resource limits too low - Sidecar gets OOMKilled before completing startup
  4. Image architecture mismatch - ARM image on an AMD64 node or vice versa
  5. Startup dependency race - Sidecar starts before the main container creates the files it needs
  6. Network policy isolation - Sidecar blocked from reaching needed services
  7. Wrong entrypoint/command - Container starts with an invalid binary path
  8. SELinux/AppArmor blocking - Security context prevents sidecar operations
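
The sidecar's exit code (visible in `kubectl describe pod`) narrows most of these causes down quickly. The mappings below are conventions rather than guarantees — codes above 128 follow the container runtime's 128+signal rule — but they are a reasonable first triage:

```shell
#!/bin/sh
# explain_exit_code CODE - map a container exit code (from `kubectl describe
# pod`) to its most likely cause. Codes above 128 mean "killed by signal
# CODE-128", per the usual container-runtime convention.
explain_exit_code() {
  case "$1" in
    0)   echo "clean exit - the sidecar command finished; it may need a long-running process" ;;
    1)   echo "application error - check the sidecar's logs for a stack trace" ;;
    126) echo "command not executable - check file mode and shebang" ;;
    127) echo "command not found - wrong entrypoint path or architecture mismatch" ;;
    137) echo "SIGKILL (128+9) - usually OOMKilled; raise the memory limit" ;;
    139) echo "SIGSEGV (128+11) - binary crashed; often an image/architecture mismatch" ;;
    143) echo "SIGTERM (128+15) - terminated during shutdown or after a failed probe" ;;
    *)   echo "exit code $1 - above 128 means killed by signal $(( $1 - 128 ))" ;;
  esac
}

explain_exit_code 137
```

For the exit code 137 in the error output above, this points straight at memory limits (Step 4) or a liveness kill.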

Step 1: Check Pod and Container Status

```bash
# Get detailed pod status:
kubectl get pod myapp-pod -o yaml

# Check container states:
kubectl describe pod myapp-pod | grep -A 20 "Containers:" | grep -A 10 "sidecar"

# Check events:
kubectl get events --field-selector involvedObject.name=myapp-pod

# Check container logs:
kubectl logs myapp-pod -c sidecar-container --previous
kubectl logs myapp-pod -c sidecar-container --tail=100

# Check all containers:
kubectl logs myapp-pod --all-containers

# Check init container status:
kubectl describe pod myapp-pod | grep -A 20 "Init Containers:"
```
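
When the `describe` output gets long, a small helper can pull out just the state of one container from a saved dump (`kubectl describe pod myapp-pod > pod.txt`). This is a hypothetical convenience, not a kubectl feature:

```shell
#!/bin/sh
# container_state FILE NAME - print the State of container NAME from a saved
# `kubectl describe pod` dump. Relies on describe's indented layout: the
# container name appears as "NAME:" and its "State:" line follows.
container_state() {
  awk -v name="$2:" '
    $1 == name              { found = 1; next }  # container section starts here
    found && $1 == "State:" { print $2; exit }   # first State line after it
  ' "$1"
}
```

Usage: `container_state pod.txt sidecar-container` prints e.g. `Waiting`, so you can script checks across many pods.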

Step 2: Verify Shared Volume Configuration

```yaml
# Check that volumes are shared correctly:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: main-container
    image: myapp:latest
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app
  - name: sidecar-container
    image: log-collector:latest
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app   # Must match the main container's mount
      readOnly: false           # Sidecar needs write access
  volumes:
  - name: shared-logs
    emptyDir: {}                # Shared between all containers in the pod
```

```bash
# Verify the volume exists in the pod:
kubectl get pod myapp-pod -o jsonpath='{.spec.volumes}' | jq

# Check mount points:
kubectl get pod myapp-pod -o jsonpath='{.spec.containers[1].volumeMounts}' | jq
```

Step 3: Fix Init Container Ordering

```yaml
# If using init containers, ensure the sidecar's dependencies are met:
apiVersion: v1
kind: Pod
spec:
  initContainers:
  - name: init-config
    image: busybox
    command: ['sh', '-c', 'mkdir -p /shared/config && cp /config/* /shared/config/']
    volumeMounts:
    - name: shared-config
      mountPath: /shared/config
  containers:
  - name: main-container
    volumeMounts:
    - name: shared-config
      mountPath: /etc/app/config
  - name: sidecar-container
    volumeMounts:
    - name: shared-config
      mountPath: /shared/config
```

```bash
# Check init container logs:
kubectl logs myapp-pod -c init-config

# If the init container is hanging:
kubectl describe pod myapp-pod | grep -A 10 "Init Containers"

# Force-delete a stuck pod (a controller will recreate it):
kubectl delete pod myapp-pod --force --grace-period=0
```

Step 4: Adjust Resource Limits for Sidecar

```yaml
# The sidecar needs adequate resources:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: main-container
    resources:
      requests:
        cpu: "100m"
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
  - name: sidecar-container
    resources:
      requests:
        cpu: "50m"
        memory: "128Mi"   # Too low = OOMKilled
      limits:
        cpu: "200m"
        memory: "256Mi"   # Increase if the sidecar crashes
```

```bash
# Check current resource usage:
kubectl top pod myapp-pod --containers

# Check resource quotas:
kubectl describe resourcequota -n default

# If OOMKilled, increase the memory limit. Note: container resources on a
# running pod are immutable unless in-place pod resize is enabled on your
# cluster, so normally you update the owning Deployment and let it roll out:
kubectl patch pod myapp-pod --type='json' -p='[
  {"op": "replace", "path": "/spec/containers/1/resources/limits/memory", "value":"512Mi"}
]'
```
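
Comparing `kubectl top` output against limits by eye is error-prone because Kubernetes memory quantities use binary suffixes (Ki/Mi/Gi). A small converter makes headroom checks scriptable (the function name is illustrative; decimal suffixes like M/G also exist but limits are usually written with binary ones):

```shell
#!/bin/sh
# to_bytes QUANTITY - convert a Kubernetes memory quantity with a binary
# suffix (Ki/Mi/Gi) to plain bytes; bare numbers pass through unchanged.
to_bytes() {
  case "$1" in
    *Ki) echo $(( ${1%Ki} * 1024 )) ;;
    *Mi) echo $(( ${1%Mi} * 1024 * 1024 )) ;;
    *Gi) echo $(( ${1%Gi} * 1024 * 1024 * 1024 )) ;;
    *)   echo "$1" ;;
  esac
}
```

For example, if the sidecar's limit is `256Mi` and `kubectl top` reports usage near `$(to_bytes 256Mi)` = 268435456 bytes, an OOM kill is imminent.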

Step 5: Fix Startup Dependency Race

```yaml
# Use startupProbe to detect the missing dependency. Note: probes gate
# readiness and liveness, not process start - a failing startupProbe on the
# sidecar restarts it rather than pausing it, so treat this as detection:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: main-container
    startupProbe:
      exec:
        command: ['sh', '-c', 'test -f /var/log/app/app.log']
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 30
  - name: sidecar-container
    startupProbe:
      exec:
        command: ['sh', '-c', 'test -f /var/log/app/app.log']
      initialDelaySeconds: 10   # Gives the main container time to create the file
      periodSeconds: 5
      failureThreshold: 30
```

```yaml
# Or use a postStart hook in the main container to create the file early:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: main-container
    lifecycle:
      postStart:
        exec:
          command: ['sh', '-c', 'mkdir -p /var/log/app && touch /var/log/app/app.log']
```

```bash
# Check probe status:
kubectl describe pod myapp-pod | grep -A 10 "Startup"
```
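
The most robust fix for the race is to make the sidecar's own entrypoint wait for its dependency, since probes cannot delay a container's start. A minimal sketch of such a wrapper (the poll interval and defaults are illustrative):

```shell
#!/bin/sh
# wait_for_file PATH TIMEOUT_SECONDS - poll once per second until PATH exists
# (returns 0) or the timeout elapses (returns 1). Intended to run at the top
# of the sidecar's entrypoint, before exec'ing the real command.
wait_for_file() {
  _path=$1
  _timeout=${2:-60}
  _elapsed=0
  until [ -e "$_path" ]; do
    [ "$_elapsed" -ge "$_timeout" ] && return 1
    sleep 1
    _elapsed=$(( _elapsed + 1 ))
  done
}
```

Inlined in a pod spec this becomes e.g. `command: ['sh', '-c', 'until [ -f /var/log/app/app.log ]; do sleep 1; done; exec /usr/local/bin/log-collector']` (the collector path is an assumed example). On Kubernetes 1.29+, declaring the sidecar as an init container with `restartPolicy: Always` solves the ordering problem natively.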

Step 6: Verify Image and Architecture

```bash
# Check the sidecar image:
kubectl get pod myapp-pod -o jsonpath='{.spec.containers[1].image}'

# Pull the image locally to inspect it (docker inspect returns an array):
docker pull log-collector:latest
docker inspect log-collector:latest | jq '.[0].Architecture'

# Check node architecture:
kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.architecture}'

# If they don't match, use a multi-arch image. In the pod spec:
#   image: log-collector:v2@sha256:abc123   # Pin the multi-arch manifest digest

# Or run the sidecar image on its own to isolate the failure:
kubectl run test-sidecar --image=log-collector:latest --overrides='
{
  "spec": {
    "containers": [{
      "name": "sidecar",
      "image": "log-collector:latest",
      "imagePullPolicy": "Always"
    }]
  }
}'
```

Step 7: Check Network Policy Configuration

```yaml
# The sidecar might need network access:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-sidecar
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          component: log-aggregator
    ports:
    - protocol: TCP
      port: 9090
  # Note: once Egress is listed in policyTypes, you must also allow DNS
  # (port 53 to kube-dns) or in-pod name resolution will fail.
```

```bash
# Check whether a network policy blocks the sidecar:
kubectl get networkpolicy -A
kubectl describe networkpolicy allow-sidecar

# Temporarily disable for testing:
kubectl delete networkpolicy allow-sidecar

# Check DNS resolution from the sidecar:
kubectl exec myapp-pod -c sidecar-container -- nslookup log-aggregator
```

Step 8: Fix Security Context Issues

```yaml
# Ensure the sidecar has the correct permissions:
apiVersion: v1
kind: Pod
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000               # Group applied to volume files for access
  containers:
  - name: sidecar-container
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: false   # If the sidecar needs to write to its own filesystem
      capabilities:
        drop: ['ALL']
```

```bash
# Check the current security context:
kubectl get pod myapp-pod -o jsonpath='{.spec.securityContext}' | jq

# If SELinux/AppArmor is blocking, a permission error here confirms it:
kubectl exec myapp-pod -c sidecar-container -- ls -la /var/log/app

# Check audit logs on the node: AVC denials land in the host's audit log
# (e.g. /var/log/audit/audit.log or journalctl), not in any pod:
#   ssh <node> sudo ausearch -m avc -ts recent
```
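
Whether `fsGroup` actually grants the sidecar write access comes down to the volume's mode bits versus the container's UID/GID. A quick offline way to reason about it — a hypothetical helper, not a kubectl feature, and it ignores supplementary groups and SELinux labels:

```shell
#!/bin/sh
# expect_write MODE FILE_UID FILE_GID RUN_UID RUN_GID - return 0 if a process
# running as RUN_UID:RUN_GID is expected to have write access to a file with
# the given octal MODE and ownership, using standard Unix permission rules.
expect_write() {
  _mode=$1; _fuid=$2; _fgid=$3; _uid=$4; _gid=$5
  _owner=$(( (0$_mode / 64) % 8 ))   # leading 0 forces octal interpretation
  _group=$(( (0$_mode / 8)  % 8 ))
  _other=$((  0$_mode       % 8 ))
  if   [ "$_uid" = "$_fuid" ]; then _bits=$_owner
  elif [ "$_gid" = "$_fgid" ]; then _bits=$_group
  else                              _bits=$_other
  fi
  [ $(( _bits & 2 )) -ne 0 ]         # write permission is the 2 bit
}
```

For example, `expect_write 770 0 1000 1000 1000` succeeds: the volume is `root:1000` mode `770`, and `fsGroup: 1000` puts the sidecar in the matching group. With mode `755` it would fail, which is exactly the "works as root, fails as 1000" symptom.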

Step 9: Validate Entrypoint and Command

```bash
# Check the container entrypoint:
kubectl get pod myapp-pod -o jsonpath='{.spec.containers[1].command}'
kubectl get pod myapp-pod -o jsonpath='{.spec.containers[1].args}'

# If the entrypoint is wrong, override it in the pod spec:
#   spec:
#     containers:
#     - name: sidecar-container
#       image: log-collector:latest
#       command: ['/bin/sh', '-c']   # Overrides the image's default entrypoint
#       args: ['while true; do cat /var/log/app/*.log; sleep 5; done']

# Test the entrypoint locally (--entrypoint must come before the image):
docker run --rm --entrypoint /bin/sh log-collector:latest -c 'ls /'

# Debug with an ephemeral container targeting the sidecar's process namespace:
kubectl debug myapp-pod -it --image=busybox --target=sidecar-container
```

Step 10: Implement Sidecar Pattern Best Practices

```yaml
# Proper sidecar configuration:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  # Shared security context
  securityContext:
    fsGroup: 1000

  # Shared volumes first
  volumes:
  - name: shared-logs
    emptyDir: {}
  - name: shared-config
    configMap:
      name: app-config

  # Init containers for setup
  initContainers:
  - name: init-logs
    image: busybox:1.36
    command: ['sh', '-c', 'mkdir -p /var/log/app && chmod 777 /var/log/app']
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app

  containers:
  # Main container
  - name: main-container
    image: myapp:v1
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app
    - name: shared-config
      mountPath: /etc/config
    resources:
      requests:
        cpu: "100m"
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
    readinessProbe:
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10

  # Sidecar container
  - name: sidecar-container
    image: fluentbit:v3
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app
      readOnly: true            # Sidecar reads, doesn't write
    env:
    - name: LOG_PATH
      value: "/var/log/app"
    resources:
      requests:
        cpu: "50m"
        memory: "128Mi"
      limits:
        cpu: "200m"
        memory: "256Mi"
    startupProbe:
      exec:
        command: ['sh', '-c', 'test -d /var/log/app']
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 10
```

```bash
# Monitor both containers:
kubectl get pod myapp-pod -w
```

Kubernetes Sidecar Checklist

| Check | Command | Expected |
|---|---|---|
| Pod status | `kubectl get pod` | 2/2 Ready |
| Container logs | `kubectl logs -c sidecar` | No errors |
| Volume mounts | `kubectl describe pod` | Shared paths match |
| Resources | `kubectl top pod` | Within limits |
| Network | `kubectl exec ... nslookup` | Resolves services |
| Security context | `kubectl get pod -o yaml` | fsGroup matches |
| Probes | `kubectl describe pod` | Probes passing |

Verify the Fix

```bash
# After fixing the configuration:

# 1. Apply the updated pod spec
kubectl apply -f myapp-pod.yaml
# pod/myapp-pod created

# 2. Check both containers are running
kubectl get pod myapp-pod
# READY: 2/2, STATUS: Running

# 3. Verify sidecar logs
kubectl logs myapp-pod -c sidecar-container
# No crash errors, collecting logs

# 4. Test the shared volume (run the redirect inside the container with
#    sh -c, or your local shell intercepts it and writes a local file)
kubectl exec myapp-pod -c main-container -- sh -c 'echo "test" > /var/log/app/test.log'
kubectl exec myapp-pod -c sidecar-container -- cat /var/log/app/test.log
# Output: test

# 5. Check resource usage
kubectl top pod myapp-pod --containers
# Sidecar using reasonable memory

# 6. Monitor for stability
kubectl logs -f myapp-pod -c sidecar-container
# Continuous log collection, no restarts

# 7. Verify sidecar functionality
kubectl exec myapp-pod -c sidecar-container -- ps aux
# Sidecar process running correctly
```

  • [Fix Kubernetes Pod Stuck in ContainerCreating](/articles/fix-kubernetes-pod-stuck-in-containercreating-state)
  • [Fix Kubernetes Container Runtime Not Ready](/articles/fix-kubernetes-container-runtime-not-ready)
  • [Fix Kubernetes Init Container Failure](/articles/fix-kubernetes-init-container-failure)