## Introduction
Kubernetes NetworkPolicy blocks traffic when policies are misconfigured, overly restrictive, or conflicting, preventing legitimate pod-to-pod communication. NetworkPolicy is a Kubernetes resource that controls traffic flow between pods using labels and selectors. By default, pods accept all traffic; as soon as any NetworkPolicy selects a pod, that pod becomes isolated and only accepts traffic matching a policy rule.

Common causes include incorrect pod selector labels, missing ingress rules for required traffic, egress rules that block DNS resolution (port 53), policies applied in the wrong namespace, a CNI plugin that does not support or enforce NetworkPolicy, multiple policies interacting unexpectedly, missing allow rules in namespaces that have a default-deny policy, and CIDR errors in ipBlock rules. Fixing these issues requires understanding NetworkPolicy selection logic, ingress/egress rule evaluation, CNI plugin behavior, and the available debugging tools. This guide provides production-proven troubleshooting for NetworkPolicy issues across Calico, Cilium, Weave Net, and other CNI plugins.
## Symptoms
- Pod-to-pod connections fail with `Connection refused` or `Connection timed out` (`curl: (7) Failed to connect to pod-ip port 80: Connection timed out`)
- DNS resolution fails from within pods after applying a NetworkPolicy
- External API calls fail from pods (egress blocked)
- Service endpoints unreachable despite correct Service configuration
- Inter-namespace communication broken
- `kubectl describe networkpolicy` shows no matching pods
- CNI logs show dropped packets for allowed traffic
- NetworkPolicy appears correct but traffic still blocked
- Ingress controller cannot reach backend pods
## Common Causes
- Pod selector doesn't match target pod labels
- Ingress rules missing for required source pods/namespaces
- Egress rules blocking DNS (UDP/TCP 53)
- Namespace selector using wrong namespace labels
- CNI plugin doesn't support NetworkPolicy or not configured
- Policy applied to wrong namespace
- ipBlock CIDR range too restrictive
- Port/protocol mismatch in rules
- Multiple policies creating unexpected intersection
- Default-deny policy without explicit allow rules
## Step-by-Step Fix
### 1. Diagnose NetworkPolicy status
Check applied policies:
```bash
# List all NetworkPolicies in namespace
kubectl get networkpolicy -n namespace

# Output:
# NAME         POD-SELECTOR   AGE
# api-policy   app=api        5d
# db-policy    app=database   5d

# Describe specific policy
kubectl describe networkpolicy api-policy -n namespace

# Output:
# Name:         api-policy
# Namespace:    default
# Created on:   2026-03-26 10:00:00 +0000 UTC
# Labels:       <none>
# Annotations:  <none>
# Spec:
#   PodSelector:  app=api
#   Ingress:
#     - From:
#         - PodSelector: app=frontend
#       Ports:
#         - Protocol: TCP
#           Port: 8080
#   Egress:
#     - To:
#         - PodSelector: app=database
#       Ports:
#         - Protocol: TCP
#           Port: 5432
#   PolicyTypes:
#     - Ingress
#     - Egress

# Check which pods match policy
kubectl get pods -n namespace -l app=api

# Verify policy selection
kubectl get networkpolicy api-policy -n namespace -o jsonpath='{.spec.podSelector.matchLabels}'
```
Check pod network connectivity:
```bash
# Test from within source pod
kubectl exec -it frontend-pod -n namespace -- curl -v http://api-service:8080/health

# Test direct pod IP
kubectl get pod api-pod -n namespace -o wide   # Note the POD IP
kubectl exec -it frontend-pod -n namespace -- curl -v http://<api-pod-ip>:8080/health

# If service works but pod IP fails = Service/iptables issue
# If both fail = NetworkPolicy or CNI issue

# Test the reverse direction (api pod back to frontend pod)
kubectl exec -it api-pod -n namespace -- nc -zv <frontend-pod-ip> 8080

# Check DNS resolution
kubectl exec -it api-pod -n namespace -- nslookup kubernetes.default
kubectl exec -it api-pod -n namespace -- cat /etc/resolv.conf
```
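The curl exit code from these probes already narrows down the cause. As a hedged sketch, the hypothetical helper below maps curl's documented exit codes (6 = resolve failure, 7 = connect failure, 28 = timeout) to the most likely NetworkPolicy-related interpretation; the mappings are heuristics, not guarantees:

```shell
# Hypothetical helper: translate curl's exit code from the probes above
# into the most likely cause (heuristic interpretations).
diagnose_curl() {
  case "$1" in
    0)  echo "ok: traffic allowed" ;;
    6)  echo "dns: name resolution failed - check DNS egress rules" ;;
    7)  echo "refused: host reachable but connection rejected (no listener, or CNI sent RST)" ;;
    28) echo "timeout: packets silently dropped - classic NetworkPolicy/CNI symptom" ;;
    *)  echo "other: curl exit code $1" ;;
  esac
}

diagnose_curl 28
```

A silent timeout (exit 28) is the most common NetworkPolicy signature, since dropped packets produce no response at all.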
### 2. Fix pod selector matching
Selector must match pod labels exactly:
```yaml
# WRONG: Selector doesn't match pod labels
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api-server  # Wrong! Pod has app=api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend  # Wrong label

# Pod definition:
# apiVersion: v1
# kind: Pod
# metadata:
#   name: api-pod
#   namespace: default
#   labels:
#     app: api        # Actual label
#     tier: backend

---
# CORRECT: Match actual pod labels
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api  # Match pod's app label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: frontend  # Match frontend's tier label
```
Verify selector matching:
```bash
# Test selector against pods
kubectl get pods -n default -l app=api

# Use label selector API directly
kubectl get pods -n default --selector="app=api,tier=backend"

# Check what policies select a specific pod
kubectl get networkpolicy -n default -o json | \
  jq -r '.items[] | select(.spec.podSelector.matchLabels.app == "api") | .metadata.name'

# Debug: Show all labels on pods
kubectl get pods -n default --show-labels

# Test complex selectors
kubectl get pods -n default -l "app in (api, backend)"
kubectl get pods -n default -l "app=api,tier!=frontend"
```
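To internalize the matching rule, here is a pure-bash sketch that emulates `matchLabels` semantics without cluster access: a selector matches a pod only if every key=value pair in the selector is present on the pod (logical AND). `selector_matches` is a hypothetical helper, not a kubectl feature:

```shell
# Hypothetical helper: emulate matchLabels semantics locally.
# Both arguments are space-separated key=value lists.
selector_matches() {
  local selector="$1" labels="$2" pair
  for pair in $selector; do
    case " $labels " in
      *" $pair "*) ;;    # this key=value pair is present on the pod
      *) return 1 ;;     # any missing pair means the selector does not match
    esac
  done
  return 0
}

# Pod labels as shown by: kubectl get pods --show-labels (commas -> spaces)
pod_labels="app=api tier=backend"

selector_matches "app=api" "$pod_labels"              && echo "app=api matches"
selector_matches "app=api tier=backend" "$pod_labels" && echo "both labels match"
selector_matches "app=api-server" "$pod_labels"       || echo "app=api-server does NOT match"
```

This is exactly why the "WRONG" policy above selects nothing: `app=api-server` is absent from the pod's labels, so the policy applies to zero pods.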
### 3. Fix ingress rules
Allow traffic from specific sources:
```yaml
# Allow from specific pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    # From pods with specific label
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
---
# Allow from specific namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: frontend-namespace
      ports:
        - protocol: TCP
          port: 8080
---
# Allow from pods in a specific namespace (combined selector)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: monitoring
          podSelector:
            matchLabels:
              app: prometheus
      ports:
        - protocol: TCP
          port: 9090
---
# Allow from IP ranges (external traffic)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/8
            except:
              - 10.0.0.0/24  # Exclude specific range
      ports:
        - protocol: TCP
          port: 8080
```
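ipBlock evaluation follows "inside `cidr` AND outside every `except` entry". The standalone sketch below (hypothetical helpers, plain bash arithmetic, no cluster needed) mirrors that logic so you can sanity-check a CIDR rule before applying it:

```shell
# Hypothetical helpers: evaluate an ipBlock rule locally.
ip_to_int() {                 # dotted quad -> 32-bit integer
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

in_cidr() {                   # in_cidr IP CIDR -> exit 0 if IP is inside CIDR
  local ip net bits mask
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "${2%/*}")
  bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

ipblock_allows() {            # ipblock_allows IP CIDR [EXCEPT_CIDR...]
  local ip="$1" cidr="$2" ex
  shift 2
  in_cidr "$ip" "$cidr" || return 1      # must be inside cidr
  for ex in "$@"; do
    in_cidr "$ip" "$ex" && return 1      # must not be inside any except range
  done
  return 0
}

# Mirrors the rule above: cidr 10.0.0.0/8, except 10.0.0.0/24
ipblock_allows 10.1.2.3 10.0.0.0/8 10.0.0.0/24 && echo "10.1.2.3 allowed"
ipblock_allows 10.0.0.5 10.0.0.0/8 10.0.0.0/24 || echo "10.0.0.5 denied (in except range)"
```

Keep in mind that ipBlock is intended for cluster-external addresses; how it interacts with pod and Service IPs varies by CNI.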
### 4. Fix egress rules for DNS
DNS egress is commonly forgotten:
```yaml
# WRONG: Egress blocks DNS
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
# Problem: No DNS egress rule! Pods can't resolve service names.
---
# CORRECT: Include DNS egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Egress
  egress:
    # Allow DNS resolution (UDP and TCP)
    - to:
        - namespaceSelector: {}  # DNS can be in any namespace
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
    # Allow database access
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
    # Allow external HTTPS (for API calls)
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8      # Exclude private ranges
              - 172.16.0.0/12
              - 192.168.0.0/16
      ports:
        - protocol: TCP
          port: 443
```
DNS egress by CIDR (if the kube-dns labels are unknown). Be aware that ipBlock is intended for cluster-external traffic: many CNIs evaluate policy after the Service ClusterIP has been DNAT-ed to a pod IP, so matching the ClusterIP may not behave as expected. Prefer the label-based rule above when possible:
```yaml
# Allow DNS to kube-dns service IP
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Egress
  egress:
    # Get kube-dns IP: kubectl get svc -n kube-system kube-dns
    - to:
        - ipBlock:
            cidr: 10.96.0.10/32  # kube-dns ClusterIP
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
    # Or allow all cluster IPs for DNS
    - to:
        - ipBlock:
            cidr: 10.96.0.0/12  # Kubernetes service CIDR
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```
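If you do pin DNS egress to the ClusterIP, render the manifest from the live value rather than hardcoding it. A minimal sketch, where `DNS_IP` is an assumed value for illustration; on a real cluster you would populate it with the commented kubectl query:

```shell
# On a real cluster, discover the ClusterIP first:
#   DNS_IP=$(kubectl get svc -n kube-system kube-dns -o jsonpath='{.spec.clusterIP}')
DNS_IP="10.96.0.10"   # assumed value for illustration

# Render the policy with the discovered IP substituted in
cat <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: ${DNS_IP}/32
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
EOF
```

Pipe the output through `kubectl apply -f -` once the rendered CIDR matches the live Service.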
### 5. Fix CNI plugin issues
Verify CNI supports NetworkPolicy:
```bash
# Check which CNI plugin is running
kubectl get pods -n kube-system -l k8s-app=calico-node
kubectl get pods -n kube-system -l app=cilium      # label may be k8s-app=cilium depending on install
kubectl get pods -n kube-system -l name=weave-net
# Note: plain Flannel does not enforce NetworkPolicy at all

# Calico
kubectl get pods -n kube-system -l k8s-app=calico-node -o wide

# Check Calico policy enforcement
calicoctl get networkpolicy -o wide
calicoctl ipam show

# Cilium
cilium status
cilium endpoint list
cilium policy get

# Check if CNI is enforcing policies
kubectl get pods -n kube-system -l k8s-app=calico-node \
  -o jsonpath='{.items[*].spec.containers[*].env[?(@.name=="CALICO_NETWORKING_BACKEND")].value}'

# Common CNI issues:

# 1. CNI not running
# Fix: Restart CNI pods
kubectl rollout restart daemonset calico-node -n kube-system
kubectl rollout restart daemonset cilium -n kube-system
kubectl rollout restart daemonset weave-net -n kube-system

# 2. CNI misconfigured
# Check CNI config (on the node)
cat /etc/cni/net.d/*.conf

# 3. Policy not propagated
# Calico: Check Felix logs
kubectl logs -n kube-system -l k8s-app=calico-node -c calico-node | grep -i policy

# Cilium: Check agent logs
kubectl logs -n kube-system -l k8s-app=cilium | grep -i policy
```
Calico-specific debugging:
```bash
# Install calicoctl
curl -o calicoctl -L "https://github.com/projectcalico/calico/releases/download/v3.26.0/calicoctl-linux-amd64"
chmod +x calicoctl

# Check policy enforcement
calicoctl get networkpolicy --all-namespaces -o wide

# Check pod (workload) endpoint status
calicoctl get workloadendpoints -o wide

# Check for policy conflicts
calicoctl get networkpolicy --all-namespaces -o yaml

# Check host endpoints (node-level policy)
calicoctl get hostendpoints -o wide
```
Cilium-specific debugging:
```bash
# Install cilium CLI
curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz
tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
rm cilium-linux-amd64.tar.gz

# Check cluster status
cilium status --wait

# List endpoints (agent command; run inside a cilium pod)
cilium endpoint list

# Get policy details (agent command)
cilium policy get

# Watch for dropped flows (agent command)
cilium monitor --type drop

# Test connectivity
cilium connectivity test

# Hubble observability (if enabled)
hubble observe --follow
```
### 6. Handle default-deny policies
Default-deny requires explicit allow:
```yaml
# Default deny all ingress (isolate namespace)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}  # Empty = all pods
  policyTypes:
    - Ingress
  # No ingress rules = deny all
---
# Default deny all egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Egress
  # No egress rules = deny all
---
# Default deny both
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
# Then add explicit allow policies
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
---
# Allow DNS for all pods (needed with default-deny-egress)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: default
spec:
  podSelector: {}  # All pods need DNS
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```
### 7. Debug multiple policy interactions
Multiple policies selecting the same pod combine additively; their allow rules are unioned:
```yaml
# Policy 1: Allow from frontend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
---
# Policy 2: Allow from monitoring
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: prometheus
      ports:
        - protocol: TCP
          port: 9090

# Result: Traffic allowed from BOTH frontend AND prometheus
# Policies are UNIONED (either can allow traffic)
```
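The union rule can be sketched as a simple allow-list merge. In this toy bash example the labels and ports are illustrative strings, not real cluster state; a connection succeeds if any single policy's allow-list covers it:

```shell
# Each policy grants its own allow-list; the effective policy is the union.
policy1_allows="app=frontend:8080"     # granted by allow-frontend
policy2_allows="app=prometheus:9090"   # granted by allow-monitoring
effective="$policy1_allows $policy2_allows"

is_allowed() {  # is_allowed "<source-label>:<port>"
  case " $effective " in
    *" $1 "*) return 0 ;;
    *)        return 1 ;;
  esac
}

is_allowed "app=frontend:8080"   && echo "frontend on 8080: allowed"
is_allowed "app=prometheus:9090" && echo "prometheus on 9090: allowed"
is_allowed "app=frontend:9090"   || echo "frontend on 9090: denied (no single policy grants it)"
```

The last case is the common surprise: frontend can reach 8080 and prometheus can reach 9090, but frontend cannot reach 9090, because no individual policy grants that combination.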
Debug policy evaluation:
```bash
# List all policies affecting a namespace
kubectl get networkpolicy -n default -o yaml

# Check which policies select a specific pod
for policy in $(kubectl get networkpolicy -n default -o jsonpath='{.items[*].metadata.name}'); do
  echo "=== Policy: $policy ==="
  kubectl get networkpolicy "$policy" -n default -o json | \
    jq '.spec.podSelector.matchLabels'
done

# Simulate policy evaluation
# For a pod with labels app=api, tier=backend:
# check each policy's podSelector against these labels

# Calico policy trace
calicoctl get networkpolicy -o yaml | grep -A20 "podSelector"
```
### 8. Monitor and alert on policy issues
Prometheus metrics for NetworkPolicy:
```yaml
# Calico (Felix) metrics of interest:
# - felix_int_dataplane_apply_seconds_bucket (policy apply latency)
# - felix_int_errors_total
# - felix_int_policy_updates_total

# Cilium metrics of interest:
# - cilium_policy_max_generation
# - cilium_policy_count_total
# - cilium_drop_count_total (reason="Policy denied")

# Prometheus alerting rules
groups:
  - name: kubernetes_networkpolicy
    rules:
      - alert: NetworkPolicyDenied
        expr: increase(cilium_drop_count_total{reason="Policy denied"}[5m]) > 100
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "NetworkPolicy denying traffic"

      - alert: PolicyNotEnforced
        expr: cilium_policy_max_generation < kube_pod_info
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "NetworkPolicy not fully enforced"
```
Network policy audit:
```bash
#!/bin/bash
# Audit script for policy coverage

echo "=== NetworkPolicy Audit ==="

for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  echo ""
  echo "Namespace: $ns"

  # Count policies
  policy_count=$(kubectl get networkpolicy -n "$ns" --no-headers 2>/dev/null | wc -l)
  echo "  Policies: $policy_count"

  # Count pods
  pod_count=$(kubectl get pods -n "$ns" --no-headers 2>/dev/null | wc -l)
  echo "  Pods: $pod_count"

  # Flag namespaces that run pods without any policy
  if [ "$policy_count" -eq 0 ] && [ "$pod_count" -gt 0 ]; then
    echo "  WARNING: No policies in namespace with $pod_count pods"
  fi
done

# Rough heuristic for egress rules missing port restrictions
kubectl get networkpolicy --all-namespaces -o yaml | \
  grep -B5 "egress:" | grep -v "port:" | head -20
```
## Prevention
- Always include DNS egress rules when using egress policies
- Test NetworkPolicy in staging before production
- Use label conventions consistently across namespaces
- Document required traffic flows for each service
- Implement default-deny only after testing allow rules
- Monitor policy denial metrics for unexpected blocks
- Use network policy visualization tools (Cilium Hubble, Calico UI)
- Regular audit of policies against actual traffic patterns
## Related Errors
- **Connection timed out**: Traffic blocked by NetworkPolicy
- **Connection refused**: Port not allowed or no listener
- **DNS resolution failed**: DNS egress not permitted
- **Service endpoints unreachable**: Policy blocking pod access
- **Cross-namespace communication failed**: Namespace selector misconfigured