You're trying to drain a node or evict a pod, but the operation is blocked by PodDisruptionBudget (PDB). PDBs protect application availability during voluntary disruptions, but overly strict settings can prevent necessary maintenance operations. Understanding how PDBs work is essential for balancing availability and operational flexibility.
Understanding PodDisruptionBudget
PDB limits the number of pods of a replicated application that can be down simultaneously during voluntary disruptions. Voluntary disruptions include node drains, pod evictions, and cluster scaling operations. PDB doesn't protect against involuntary disruptions like node failures or pod crashes.
PDB specifies either minAvailable (minimum healthy pods) or maxUnavailable (maximum pods that can be down). When a disruption would violate the PDB, the operation is blocked until enough pods become healthy.
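The arithmetic behind that decision can be sketched in a few lines. This is a hedged illustration with hypothetical numbers (5 expected replicas, 4 currently healthy), not the controller's actual code: the disruption controller effectively computes `disruptionsAllowed = currentHealthy - desiredHealthy`, where `desiredHealthy` comes from `minAvailable` directly, or from `expected - maxUnavailable`:

```shell
#!/bin/sh
# Sketch of how disruptionsAllowed is derived, with hypothetical numbers:
# 5 replicas expected, 4 currently passing readiness.
expected=5
current_healthy=4

# Case 1: minAvailable: 3 -> desiredHealthy is minAvailable itself
desired=3
allowed=$((current_healthy - desired))
if [ "$allowed" -lt 0 ]; then allowed=0; fi
echo "minAvailable=3: disruptionsAllowed=$allowed"    # 1 eviction allowed

# Case 2: maxUnavailable: 1 -> desiredHealthy is expected - maxUnavailable
desired=$((expected - 1))
allowed=$((current_healthy - desired))
if [ "$allowed" -lt 0 ]; then allowed=0; fi
echo "maxUnavailable=1: disruptionsAllowed=$allowed"  # 0: one pod is already unhealthy
```

Note how in the second case the one already-unhealthy pod consumes the entire budget, so evictions are blocked even though the PDB itself looks permissive.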
Diagnosis Commands
Check PDB status:
```bash
# List PodDisruptionBudgets
kubectl get pdb -n namespace

# Get PDB details
kubectl describe pdb pdb-name -n namespace

# Check PDB status fields
kubectl get pdb pdb-name -n namespace -o yaml | grep -A 10 status
```
Check allowed disruptions:
```bash
# Check how many disruptions are allowed
kubectl get pdb pdb-name -n namespace -o jsonpath='{.status.disruptionsAllowed}'
# If 0, no evictions are allowed

# Check current healthy pods
kubectl get pdb pdb-name -n namespace -o jsonpath='{.status.currentHealthy}'

# Check desired minimum
kubectl get pdb pdb-name -n namespace -o jsonpath='{.status.desiredHealthy}'
```
Check deployment/replicas:
```bash
# Check deployment replicas
kubectl get deployment deployment-name -n namespace

# Check pod status
kubectl get pods -n namespace -l app=app-label

# Count healthy (ready) pods - kubectl's jsonpath doesn't support nested
# filters, so print each pod's Ready condition and count the True values
kubectl get pods -n namespace -l app=app-label \
  -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}' | grep -c True
```
Common Solutions
Solution 1: Fix minAvailable Too Strict
minAvailable might be too high relative to replicas:
```bash
# Check PDB configuration (grep needs -E for alternation)
kubectl get pdb pdb-name -n namespace -o yaml | grep -E "minAvailable|maxUnavailable"

# Example: 3 replicas, minAvailable: 3
# This means no disruptions allowed at all!
```
Fix minAvailable:
```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-pdb
spec:
  minAvailable: 2  # Changed from 3 - allows 1 disruption
  selector:
    matchLabels:
      app: myapp
```
Use percentage for flexibility:
```yaml
spec:
  minAvailable: "50%"  # 50% of pods must remain
  # For 4 replicas: 2 must remain, 2 can be disrupted
```
Solution 2: Switch to maxUnavailable
maxUnavailable is often more intuitive:
```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-pdb
spec:
  maxUnavailable: 1  # Allow 1 pod to be disrupted
  selector:
    matchLabels:
      app: myapp
```
Use percentage:
```yaml
spec:
  maxUnavailable: "25%"  # Allow 25% of pods to be down
  # For 4 replicas: 1 can be disrupted
```
Solution 3: Increase Deployment Replicas
More replicas give more disruption headroom:
```bash
# Check current replicas
kubectl get deployment deployment-name -n namespace

# Scale up deployment
kubectl scale deployment deployment-name -n namespace --replicas=5

# Now PDB has more pods to work with
kubectl describe pdb pdb-name -n namespace
```
Example:
- PDB: minAvailable: 2
- Before: 3 replicas (allows 1 disruption)
- After: 5 replicas (allows 3 disruptions)
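The headroom arithmetic in that example can be checked directly. This is a minimal sketch with the example's numbers hard-coded; assuming all replicas are healthy, allowed disruptions are simply healthy replicas minus `minAvailable`:

```shell
#!/bin/sh
# Headroom = healthy pods - desiredHealthy. With minAvailable: 2 held fixed,
# scaling from 3 to 5 replicas (all healthy) raises the disruption budget.
min_available=2

replicas=3
before=$((replicas - min_available))
echo "3 replicas: $before disruption(s) allowed"  # 1

replicas=5
after=$((replicas - min_available))
echo "5 replicas: $after disruption(s) allowed"   # 3
```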
Solution 4: Fix PDB Pod Selector
PDB must select the right pods:
```bash
# Check PDB selector
kubectl get pdb pdb-name -n namespace -o jsonpath='{.spec.selector}'

# Check pod labels
kubectl get pods -n namespace --show-labels

# Verify pods are selected
kubectl get pods -n namespace -l app=pdb-label
```
Fix selector:
```yaml
spec:
  selector:
    matchLabels:
      app: correct-label  # Must match deployment pod labels
```
Solution 5: Wait for Healthy Pods
Disruptions are blocked when pods aren't ready:
```bash
# Check pod readiness
kubectl get pods -n namespace -l app=app-label

# Count ready pods
kubectl get pods -n namespace -l app=app-label | grep "1/1.*Running" | wc -l

# If pods are starting or unhealthy, wait
kubectl get pods -n namespace -l app=app-label -w
```
Fix unhealthy pods:
```bash
# Check pod issues
kubectl describe pod unhealthy-pod -n namespace

# Check pod logs
kubectl logs unhealthy-pod -n namespace

# Fix underlying issue (image, config, resources, etc.)
```
Solution 6: Check for Multiple PDBs
Multiple PDBs might select the same pods, which makes eviction behavior hard to reason about:
```bash
# List all PDBs
kubectl get pdb -n namespace

# Check whether multiple PDBs select the same pods
for pdb in $(kubectl get pdb -n namespace -o name); do
  echo "=== $pdb ==="
  kubectl get $pdb -n namespace -o jsonpath='{.spec.selector}'
  echo ""
done
```
Remove conflicting PDBs:
```bash
# Delete conflicting PDB
kubectl delete pdb conflicting-pdb -n namespace
```
Solution 7: Force Drain When Necessary
In emergencies, you can force drain:
```bash
# Normal drain (respects PDB)
kubectl drain node-name --ignore-daemonsets --delete-emptydir-data

# Force drain (ignores PDB - dangerous!)
kubectl drain node-name --ignore-daemonsets --delete-emptydir-data --disable-eviction

# Or delete pods directly (bypasses PDB)
kubectl delete pod pod-name -n namespace --force --grace-period=0
```
Note: Force operations reduce availability and should only be used in emergencies.
Solution 8: Temporarily Delete PDB
Remove PDB temporarily for maintenance:
```bash
# Save PDB config
kubectl get pdb pdb-name -n namespace -o yaml > pdb-backup.yaml

# Delete PDB
kubectl delete pdb pdb-name -n namespace

# Perform maintenance
kubectl drain node-name --ignore-daemonsets --delete-emptydir-data

# Recreate PDB after maintenance
kubectl apply -f pdb-backup.yaml
```
Solution 9: Update PDB During Maintenance
Modify PDB for maintenance window:
```bash
# Temporarily relax PDB
kubectl patch pdb pdb-name -n namespace -p '{"spec":{"minAvailable":1}}'
# Or
kubectl patch pdb pdb-name -n namespace -p '{"spec":{"maxUnavailable":2}}'

# Perform maintenance

# Restore original settings
kubectl patch pdb pdb-name -n namespace -p '{"spec":{"minAvailable":2}}'
```
Solution 10: Check Eviction API
Evictions go through the eviction API:
```bash
# kubectl has no "eviction" subcommand; evictions are created through the
# pod's eviction subresource. To exercise it directly, POST an Eviction object:
cat <<'EOF' > eviction.json
{
  "apiVersion": "policy/v1",
  "kind": "Eviction",
  "metadata": {"name": "pod-name", "namespace": "namespace"}
}
EOF
kubectl create --raw /api/v1/namespaces/namespace/pods/pod-name/eviction -f eviction.json

# If the PDB blocks the eviction, the API returns 429 Too Many Requests
# with a message that the pod's disruption budget would be violated
```
Check eviction status:
```bash
# Check if eviction would succeed
kubectl get pdb pdb-name -n namespace -o yaml | grep -A 5 "disruptionsAllowed"
```
Verification
After fixing PDB issues:
```bash
# Check PDB allows disruptions
kubectl get pdb pdb-name -n namespace

# Check disruptionsAllowed > 0
kubectl get pdb pdb-name -n namespace -o jsonpath='{.status.disruptionsAllowed}'

# Test drain without evicting anything
kubectl drain node-name --ignore-daemonsets --delete-emptydir-data --dry-run=client

# Verify pod health
kubectl get pods -n namespace -l app=app-label
```
PDB Configuration Examples
Single Replica Application
```yaml
# For a single replica, no disruption can be allowed
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: single-replica-pdb
spec:
  minAvailable: 1  # Must always have 1
  selector:
    matchLabels:
      app: critical-app
# This blocks all voluntary disruptions
# Consider scaling to 2+ replicas for maintenance flexibility
```
High Availability Application
```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: ha-pdb
spec:
  maxUnavailable: 1  # Allow 1 disruption
  selector:
    matchLabels:
      app: ha-app
```
Percentage-based PDB
```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: percentage-pdb
spec:
  minAvailable: "50%"  # Scale-friendly
  selector:
    matchLabels:
      app: scalable-app
```
PDB Blocking Causes Summary
| Cause | Check | Solution |
|---|---|---|
| minAvailable = replicas | `kubectl get pdb -o yaml` | Reduce minAvailable |
| Pods not ready | `kubectl get pods` | Fix unhealthy pods |
| Too few replicas | `kubectl get deployment` | Scale deployment |
| Wrong pod selector | `kubectl get pdb -o yaml` | Fix selector |
| Multiple PDBs | `kubectl get pdb` | Remove conflicts |
| Percentage too high | Check minAvailable % | Adjust percentage |
Prevention Best Practices
- Set realistic minAvailable/maxUnavailable values.
- Use percentage-based PDBs for scalable deployments.
- Ensure the deployment has enough replicas for PDB headroom.
- Test drain operations before production maintenance.
- Monitor PDB status and healthy pod count.
- Have emergency procedures for forced operations.
- Document each PDB's purpose and expected maintenance impact.
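As a pre-maintenance gate, the drain decision can be reduced to a check on each PDB's `disruptionsAllowed`. This is a minimal sketch; the `drain_safe` helper is hypothetical, and in a real script its argument would come from `kubectl get pdb <name> -o jsonpath='{.status.disruptionsAllowed}'` for every PDB covering the node's pods:

```shell
#!/bin/sh
# A drain is safe only if every relevant PDB reports disruptionsAllowed > 0.
# drain_safe is a hypothetical helper; feed it the status value per PDB,
# e.g. from: kubectl get pdb <name> -o jsonpath='{.status.disruptionsAllowed}'
drain_safe() {
  # $1 = disruptionsAllowed for one PDB
  [ "$1" -gt 0 ]
}

if drain_safe 2; then echo "PDB allows eviction: proceed with drain"; fi
if ! drain_safe 0; then echo "PDB blocks eviction: relax PDB or scale up first"; fi
```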
PDB blocking usually means minAvailable is too strict relative to the replica count, or too few pods are healthy. The `kubectl describe pdb` command shows exactly why disruptions are blocked: compare `disruptionsAllowed` and `currentHealthy` against `desiredHealthy` to understand the constraint.