Introduction
DaemonSets are supposed to give you one Pod per eligible node, but “eligible” is controlled by labels, taints, affinity, and node readiness. When a DaemonSet misses nodes, the common assumption is that the DaemonSet controller is malfunctioning. More often, the real problem is that the selector logic in the Pod template no longer matches the live node labels or the target nodes are blocked by taints the DaemonSet never tolerated.
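All of these eligibility knobs live in the Pod template of the DaemonSet itself. A minimal sketch of where each one sits (the name my-daemonset, the disktype label, and the taint key/value are hypothetical placeholders, not values from any real cluster):

```yaml
# Hypothetical DaemonSet showing the scheduling knobs that control
# node eligibility; all names and label values are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-daemonset
spec:
  selector:
    matchLabels:
      app: my-daemonset
  template:
    metadata:
      labels:
        app: my-daemonset
    spec:
      nodeSelector:          # node must carry this exact label
        disktype: ssd
      tolerations:           # allows scheduling onto matching tainted nodes
      - key: dedicated
        operator: Equal
        value: logging
        effect: NoSchedule
      containers:
      - name: agent
        image: example/agent:1.0
```

If any one of these constraints excludes a node, that node drops out of the desired count entirely.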
Symptoms
- The DaemonSet desired count is lower than expected
- Some nodes never receive a DaemonSet Pod
- New nodes join the cluster but do not get the DaemonSet workload
- Scheduling events point to selector, affinity, or taint constraints
Common Causes
- Node labels do not match the DaemonSet nodeSelector
- Required node affinity is too narrow or outdated
- Taints prevent scheduling and the DaemonSet lacks matching tolerations
- Operators expect all nodes to match a label that only some of them actually have
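A typical instance of the first cause, with hypothetical names: the DaemonSet requires a disktype: ssd label, but the node was provisioned with a differently spelled label key, so it never becomes eligible:

```yaml
# Node as provisioned (hypothetical): the label key is "disk", not "disktype"
apiVersion: v1
kind: Node
metadata:
  name: worker-1
  labels:
    disk: ssd
---
# Fragment of the DaemonSet Pod template: requires a label the node lacks,
# so worker-1 is silently excluded from the desired count.
# spec.template.spec:
nodeSelector:
  disktype: ssd
```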
Step-by-Step Fix
1. Check the DaemonSet scheduling constraints. Review nodeSelector, node affinity, and tolerations together because any one of them can eliminate nodes from eligibility.

```shell
kubectl get daemonset my-daemonset -o yaml
```

2. Inspect real node labels and taints. Do not trust memory or provisioning docs; verify what labels the nodes actually have now.

```shell
kubectl get nodes --show-labels
kubectl describe node my-node
```

3. Fix labels or widen scheduling rules intentionally. If the nodes should host the DaemonSet, either apply the missing labels or adjust the selector logic to match reality.

4. Add tolerations where appropriate. A correct selector still will not place Pods onto tainted nodes unless the DaemonSet tolerates them.
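For the last two steps, with hypothetical node, label, and taint names: either relabel the node, or add a toleration that matches the blocking taint. The key, value, and effect must all line up unless operator: Exists is used to match any value:

```yaml
# Relabeling alternative: apply the missing label to the node, e.g.
#   kubectl label nodes my-node disktype=ssd
#
# Toleration in the DaemonSet Pod template for a node tainted with
#   dedicated=monitoring:NoSchedule   (hypothetical taint)
# spec.template.spec:
tolerations:
- key: dedicated
  operator: Equal
  value: monitoring
  effect: NoSchedule
```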
Prevention
- Treat node labels and DaemonSet scheduling rules as one coordinated contract
- Validate DaemonSet coverage when adding new node groups or taints
- Prefer clearly documented label conventions over ad hoc labels
- Monitor DaemonSet desired vs current vs ready counts after cluster changes
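One way to watch desired vs current vs ready after a cluster change is to read the controller's own status counters; a sketch with a hypothetical DaemonSet name (requires access to a live cluster):

```shell
# Hypothetical names. If DESIRED is lower than the number of nodes you
# expected to match, eligibility rules are excluding nodes; if READY lags
# DESIRED, Pods are scheduled but failing.
kubectl get daemonset my-daemonset \
  -o custom-columns=DESIRED:.status.desiredNumberScheduled,CURRENT:.status.currentNumberScheduled,READY:.status.numberReady
```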