Introduction

Kubernetes evicts Pods under memory pressure to keep the node alive. That means an evicted Pod is often a symptom of node-level resource exhaustion, not just an application crash. The kubelet chooses which Pods to evict based partly on QoS class and resource guarantees, so workloads without realistic requests and limits tend to be evicted first.

Symptoms

  • Pods show status Evicted
  • Node conditions report MemoryPressure
  • Pod events mention low memory on the node
  • Workloads restart or disappear even though the application container itself did not log a normal failure
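A quick way to surface these symptoms, assuming `kubectl` access to the cluster (the column position is an assumption about the default table output):

```shell
# List Pods whose STATUS column reads "Evicted".
# With --all-namespaces, the default columns are
# NAMESPACE NAME READY STATUS RESTARTS AGE, so STATUS is field $4.
kubectl get pods --all-namespaces | awk '$4 == "Evicted"'
```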

Common Causes

  • The node genuinely does not have enough allocatable memory for current workload density
  • Pods run without meaningful memory requests and are treated as easy eviction targets
  • Limits are too low or too high relative to real workload behavior
  • Memory spikes or leaks push the node past kubelet eviction thresholds
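The eviction thresholds mentioned above are configurable on the kubelet. A minimal KubeletConfiguration sketch (the 200Mi value is illustrative, not a recommendation):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  # Evict Pods when free node memory drops below this amount (example value).
  memory.available: "200Mi"
```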

Step-by-Step Fix

  1. Check node memory pressure and recent events. Confirm that the node, not just the Pod, is the primary failure domain.

```bash
kubectl describe node my-node
```

  2. Review the evicted Pod's requests, limits, and QoS class. Pods without requests often end up as BestEffort and are evicted first.

  3. Set realistic memory requests and limits. Requests should reflect what the workload normally needs, while limits should protect the node without causing constant OOM kills.

```yaml
resources:
  requests:
    memory: 256Mi
  limits:
    memory: 512Mi
```

  4. Reduce node pressure or add capacity. If the node is simply oversubscribed, resource tuning alone will not stop the eviction pattern.
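To spot the easy eviction targets from step 2, one sketch is to list each Pod's QoS class and filter for BestEffort (assumes `kubectl` access; the awk filter is an assumption about the two-column output):

```shell
# Print NAME and QOS for every Pod, then keep only BestEffort rows
# (skipping the header line), since those are first in line for eviction.
kubectl get pods -o custom-columns='NAME:.metadata.name,QOS:.status.qosClass' \
  | awk 'NR > 1 && $2 == "BestEffort"'
```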

Prevention

  • Put real memory requests on all important workloads
  • Monitor node memory pressure, not only Pod restart counts
  • Keep enough node headroom for bursty or cache-heavy services
  • Investigate repeated evictions as platform health issues, not just app issues
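One rough way to check headroom, assuming `kubectl` access (`my-node` is a placeholder): the "Allocated resources" section of `kubectl describe node` reports requested memory as a percentage of allocatable.

```shell
# Show the node's resource allocation summary; a memory request percentage
# near 100% means little headroom for bursts before eviction kicks in.
kubectl describe node my-node | grep -A 7 'Allocated resources'
```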