You deployed your application to Kubernetes, but when you check the pod status, it's perpetually stuck at Pending. No containers are running, and your application never starts. Pods stuck in Pending state are one of the most common Kubernetes issues, and the causes range from insufficient cluster resources to misconfigured scheduling constraints.

## Understanding Pod Pending State

When a pod is in Pending state, the API server has accepted the pod but the scheduler hasn't bound it to a node. This could be because no node meets the scheduling requirements, resources are insufficient, or a required persistent volume isn't available. The scheduler continuously retries placement, but something prevents a successful assignment.
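To find every Pending pod at once, you can filter on the STATUS column. A minimal sketch using canned `kubectl get pods -A` output (the namespaces and pod names are invented; on a live cluster, `kubectl get pods -A --field-selector=status.phase=Pending` does the same filtering server-side):

```shell
# Canned output standing in for: kubectl get pods -A
pod_table='NAMESPACE   NAME         READY   STATUS    RESTARTS   AGE
default     web-abc123   0/1     Pending   0          5m
default     api-def456   1/1     Running   0          10m'

# Keep only rows whose STATUS column (field 4) is Pending
pending=$(printf '%s\n' "$pod_table" | awk '$4 == "Pending" {print $1 "/" $2}')
echo "Pending pods: $pending"
```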

## Diagnosis Commands

Start by checking the pod's detailed status:

```bash
# Get pod status
kubectl get pods -n namespace

# Describe the pending pod
kubectl describe pod pod-name -n namespace

# Check events specifically
kubectl get events -n namespace --sort-by='.lastTimestamp'
```

Look at scheduler logs:

```bash
# Check kube-scheduler logs
kubectl logs -n kube-system kube-scheduler-master -c kube-scheduler

# Or if scheduler is a deployment
kubectl logs -n kube-system -l component=kube-scheduler
```

Check cluster resources:

```bash
# View node resources
kubectl describe nodes

# Check resource allocation
kubectl top nodes
kubectl top pods -n namespace

# View available capacity
kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory'
```

## Common Solutions

### Solution 1: Fix Resource Constraints

The most common cause is insufficient CPU or memory:

```bash
# Check pod resource requests
kubectl get pod pod-name -n namespace -o jsonpath='{.spec.containers[*].resources.requests}'
```

If requests are too high:

```yaml
# Reduce resource requests
resources:
  requests:
    cpu: "100m"      # Was: "2000m" - too high
    memory: "128Mi"  # Was: "4Gi" - too high
  limits:
    cpu: "500m"
    memory: "256Mi"
```

Check node allocatable resources:

```bash
# View node allocatable resources
kubectl describe node node-name | grep -A 5 Allocatable
```
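Whether a request fits is simple arithmetic: the scheduler compares the pod's requests against each node's allocatable capacity. A sketch of that comparison in shell, using made-up quantities (`2500m` requested against `2` CPUs allocatable):

```shell
# Normalize a Kubernetes CPU quantity ("2500m" or "2") to millicores
to_millicores() {
  case "$1" in
    *m) echo "${1%m}" ;;            # already in millicores
    *)  echo $(( $1 * 1000 )) ;;    # whole cores -> millicores
  esac
}

request=$(to_millicores "2500m")    # pod's CPU request (invented value)
allocatable=$(to_millicores "2")    # node's allocatable CPU (invented value)

if [ "$request" -gt "$allocatable" ]; then
  verdict="request exceeds allocatable by $((request - allocatable))m"
else
  verdict="request fits"
fi
echo "$verdict"
```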

### Solution 2: Fix Node Selector and Affinity Issues

If the pod has specific node requirements that no node satisfies:

```bash
# Check pod's node requirements
kubectl get pod pod-name -n namespace -o yaml | grep -A 10 nodeSelector
kubectl get pod pod-name -n namespace -o yaml | grep -A 20 affinity
```

Verify node labels:

```bash
# List node labels
kubectl get nodes --show-labels

# Check specific label
kubectl get nodes -l label-key=label-value
```

Add missing labels to nodes:

```bash
# Label a node to match pod's nodeSelector
kubectl label node node-name disktype=ssd

# Verify label was added
kubectl get node node-name --show-labels
```

Fix node affinity configuration:

```yaml
# Ensure affinity rules match existing nodes
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd  # Must match an actual node label
```
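A nodeSelector match is an exact key=value comparison against the node's label set. A toy illustration of that comparison with invented labels (no cluster involved):

```shell
# Node's labels as a comma-delimited list (invented values), padded with
# commas so we can match whole key=value entries only
node_labels=",disktype=ssd,zone=us-east-1a,"
selector="disktype=ssd"   # the pod's nodeSelector entry

case "$node_labels" in
  *",${selector},"*) match="yes" ;;
  *)                 match="no"  ;;
esac
echo "selector match: $match"
```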

### Solution 3: Fix Taints and Tolerations

Nodes may have taints that repel pods without matching tolerations:

```bash
# Check node taints
kubectl describe nodes | grep -A 2 Taints

# Common taints:
# node.kubernetes.io/not-ready:NoSchedule
# node.kubernetes.io/unreachable:NoSchedule
# node.kubernetes.io/memory-pressure:NoSchedule
```

Add toleration to pod:

```yaml
spec:
  tolerations:
  - key: "node.kubernetes.io/memory-pressure"
    operator: "Exists"
    effect: "NoSchedule"
  - key: "dedicated"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
```
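A toleration with `operator: Exists` matches any taint with the same key, and the effect must also match when one is specified. A toy check of that matching rule, using hard-coded values mirroring the memory-pressure taint:

```shell
# Node's taint (hard-coded for illustration)
taint_key="node.kubernetes.io/memory-pressure"
taint_effect="NoSchedule"

# Pod's toleration (hard-coded for illustration)
tol_key="node.kubernetes.io/memory-pressure"
tol_op="Exists"
tol_effect="NoSchedule"

# Exists operator: key must match; effect must match if the toleration sets one
if [ "$tol_key" = "$taint_key" ] && [ "$tol_op" = "Exists" ] \
   && { [ -z "$tol_effect" ] || [ "$tol_effect" = "$taint_effect" ]; }; then
  tolerated="yes"
else
  tolerated="no"
fi
echo "taint tolerated: $tolerated"
```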

Remove unnecessary taints:

```bash
# Remove taint from node (note the trailing hyphen)
kubectl taint nodes node-name key=value:NoSchedule-

# Example: Remove not-ready taint
kubectl taint nodes node-name node.kubernetes.io/not-ready:NoSchedule-
```

### Solution 4: Fix Persistent Volume Claim Issues

If the pod uses PVCs that aren't bound:

```bash
# Check PVC status
kubectl get pvc -n namespace

# Describe the PVC
kubectl describe pvc pvc-name -n namespace
```

Common PVC issues:

```bash
# No available PV
kubectl get pv

# Storage class doesn't exist
kubectl get storageclass

# Create missing storage class
# (kubernetes.io/aws-ebs is the legacy in-tree provisioner; on clusters
# running the EBS CSI driver, use provisioner: ebs.csi.aws.com instead)
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
EOF
```

Fix PVC configuration:

```yaml
# Ensure storage class exists
spec:
  storageClassName: standard  # Must match an existing StorageClass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
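You can script the bound-state check. A sketch against canned `kubectl get pvc` output (the PVC name and StorageClass are invented; on a live cluster run `kubectl get pvc -n namespace`):

```shell
# Canned output standing in for: kubectl get pvc -n namespace
pvc_table='NAME       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-pvc   Pending                                  standard       3m'

# Pull the STATUS column from the data row
phase=$(printf '%s\n' "$pvc_table" | awk 'NR==2 {print $2}')
if [ "$phase" != "Bound" ]; then
  echo "PVC is $phase - check that the StorageClass exists and a PV is available"
fi
```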

### Solution 5: Fix Pod Anti-Affinity Blocking Scheduling

Strict anti-affinity rules may prevent scheduling:

```yaml
# Problematic strict anti-affinity
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: myapp
        topologyKey: kubernetes.io/hostname
```

With only one schedulable node and two replicas, the second replica can never be placed. Switch to preferred (soft) anti-affinity:

```yaml
# Use preferred (soft) anti-affinity
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: myapp
          topologyKey: kubernetes.io/hostname
```
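The failure mode is pure counting: with required hostname anti-affinity, at most one matching replica fits per node. A sketch of that arithmetic with assumed numbers (two replicas, one node):

```shell
replicas=2            # desired replicas (assumed value)
schedulable_nodes=1   # nodes that pass all other scheduling filters (assumed)

# Required hostname anti-affinity caps replicas at one per node
if [ "$replicas" -gt "$schedulable_nodes" ]; then
  stuck=$((replicas - schedulable_nodes))
  echo "$stuck replica(s) will stay Pending under required anti-affinity"
fi
```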

### Solution 6: Fix PriorityClass Issues

Lower priority pods may be preempted or wait indefinitely:

```bash
# Check priority classes
kubectl get priorityclasses

# Check pod priority
kubectl get pod pod-name -n namespace -o yaml | grep priorityClassName
```

Create or update priority class:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "High priority class for critical pods"
```

Assign to pod:

```yaml
spec:
  priorityClassName: high-priority
```

### Solution 7: Fix Namespace Resource Quotas

Resource quotas may limit available resources:

```bash
# Check resource quotas
kubectl get resourcequota -n namespace

# Describe quota
kubectl describe resourcequota quota-name -n namespace
```

If quota is exhausted:

```yaml
# Increase quota limits
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: mynamespace
spec:
  hard:
    requests.cpu: "10"      # Increase from 4
    requests.memory: 20Gi   # Increase from 8Gi
    limits.cpu: "20"
    limits.memory: 40Gi
```
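Quota admission is additive: a pod is rejected if its request plus the namespace's current usage exceeds the hard limit. A sketch of that check with assumed numbers (as would appear in the Used and Hard columns of `kubectl describe resourcequota`):

```shell
used_cpu=4000       # millicores already requested in the namespace (assumed)
hard_cpu=4000       # quota hard limit in millicores (assumed)
new_pod_cpu=500     # incoming pod's CPU request in millicores (assumed)

# Admission fails if usage plus the new request exceeds the hard limit
if [ $((used_cpu + new_pod_cpu)) -gt "$hard_cpu" ]; then
  overage=$((used_cpu + new_pod_cpu - hard_cpu))
  echo "admission would exceed quota by ${overage}m CPU"
fi
```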

### Solution 8: Check Node Conditions

Nodes may be in an unhealthy state:

```bash
# Check node conditions
kubectl get nodes
kubectl describe node node-name | grep -A 10 Conditions

# Common conditions to check:
# Ready, MemoryPressure, DiskPressure, PIDPressure, NetworkUnavailable
```

Fix node issues:

```bash
# If node is NotReady, check kubelet (run these on the affected node)
sudo systemctl status kubelet
sudo journalctl -u kubelet -f

# Restart kubelet if needed
sudo systemctl restart kubelet

# Check node resources
df -h    # Disk space
free -m  # Memory
```

## Verification

After applying fixes:

```bash
# Watch pod status
kubectl get pod pod-name -n namespace -w

# Check pod events
kubectl describe pod pod-name -n namespace | tail -20

# Verify pod is running
kubectl get pods -n namespace
```

## Common Pending Causes Summary

| Cause | Symptoms | Solution |
|-------|----------|----------|
| Insufficient resources | Events show "Insufficient cpu/memory" | Reduce requests or add nodes |
| Node selector mismatch | "0/3 nodes match node selector" | Add labels or fix selector |
| Taint blocking | "0/3 nodes are available: 3 node(s) had taints" | Add tolerations or remove taints |
| PVC not bound | "PersistentVolumeClaim not bound" | Create PV or fix StorageClass |
| Anti-affinity too strict | "0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules" | Use preferred anti-affinity |
| Quota exceeded | "exceeded quota" | Increase quota or reduce requests |

## Prevention Best Practices

- Set realistic resource requests based on actual usage.
- Use preferred affinity/anti-affinity instead of required where possible.
- Ensure StorageClasses exist before deploying stateful apps.
- Monitor cluster capacity and add nodes proactively.
- Use priority classes for critical workloads.
- Regularly check node health and resource usage.

Pod Pending issues are almost always solvable by examining the pod's requirements against cluster capacity. The `kubectl describe pod` command is your most valuable diagnostic tool - it shows exactly why the scheduler cannot place the pod.