Your pods can't communicate, and you suspect NetworkPolicy is blocking the traffic. NetworkPolicies control pod-to-pod communication, but misconfigured rules can unexpectedly isolate pods or block legitimate traffic. Debugging network policies requires understanding how rules apply and testing actual connectivity.

Understanding NetworkPolicy Enforcement

NetworkPolicies specify how pods communicate with each other and with other network endpoints. They're additive: allow rules from multiple policies are combined, and traffic is permitted if any selecting policy allows it. If no policy selects a pod, it can communicate freely; once any policy selects a pod, only explicitly allowed traffic gets through.
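To make this concrete, here is an illustrative example (namespace name is a placeholder): a single default-deny policy with an empty podSelector selects every pod in its namespace and, because it lists no ingress rules, blocks all incoming traffic to those pods.

```yaml
# Illustrative default-deny policy; "my-namespace" is a placeholder.
# The empty podSelector selects every pod in the namespace; with
# policyTypes: [Ingress] and no ingress rules, all inbound traffic is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```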

NetworkPolicies require a CNI plugin that supports them (Calico, Cilium, Weave Net, etc.). Without a policy-capable CNI, NetworkPolicy objects are accepted by the API server but silently ignored.

Diagnosis Commands

Start by checking NetworkPolicy configuration:

```bash
# List NetworkPolicies in namespace
kubectl get networkpolicies -n namespace
kubectl get netpol -n namespace  # Short name

# Describe specific policy
kubectl describe networkpolicy policy-name -n namespace

# Get all policies that might affect a pod
kubectl get networkpolicies -n namespace -o wide
```

Check which policies select your pod:

```bash
# Get pod labels
kubectl get pod pod-name -n namespace --show-labels

# Check policies' pod selectors
kubectl get networkpolicies -n namespace -o jsonpath='{.items[*].spec.podSelector}'
```

Test connectivity:

```bash
# Test pod-to-pod connectivity
kubectl exec -it source-pod -n namespace -- curl target-pod-ip:port

# Test service connectivity
kubectl exec -it source-pod -n namespace -- curl service-name:port

# Test with nc (netcat)
kubectl exec -it source-pod -n namespace -- nc -zv target-ip port

# Test DNS resolution
kubectl exec -it source-pod -n namespace -- nslookup service-name.namespace
```

Common Solutions

Solution 1: Check Policy Pod Selector

Policies only affect pods matching the selector:

```bash
# Check policy pod selector
kubectl get networkpolicy policy-name -n namespace -o yaml | grep -A 5 podSelector

# Verify pod labels match
kubectl get pods -n namespace --show-labels

# Check if pod is selected
kubectl get pods -n namespace -l app=target-app --show-labels
```

Fix pod selector mismatch:

```yaml
# NetworkPolicy with incorrect selector
spec:
  podSelector:
    matchLabels:
      app: wrong-app  # Doesn't match target pods

# Fix: Update selector
spec:
  podSelector:
    matchLabels:
      app: correct-app  # Matches target pods
```

Add missing labels to pods:

```bash
# Label pod to match policy selector
kubectl label pod pod-name -n namespace app=target-app
```

Solution 2: Fix Ingress Rules

Ingress rules control incoming traffic to selected pods:

```bash
# Check ingress rules
kubectl get networkpolicy policy-name -n namespace -o yaml | grep -A 30 ingress

# Test if ingress is blocking
kubectl exec -it source-pod -n source-namespace -- curl target-pod-ip:port
```

Fix ingress configuration:

```yaml
# Allow specific pods to connect
spec:
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend  # Only pods with this label can connect
    ports:
    - protocol: TCP
      port: 8080

# Allow all pods in namespace
spec:
  ingress:
  - from:
    - podSelector: {}  # Empty selector = all pods in same namespace

# Allow specific namespace pods
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          environment: production
    ports:
    - protocol: TCP
      port: 8080

# Combine pod and namespace selector (AND: both must match)
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: myproject
      podSelector:
        matchLabels:
          role: backend

# Allow all sources, including external traffic (no from rules)
spec:
  ingress:
  - ports:
    - protocol: TCP
      port: 80
```

Solution 3: Fix Egress Rules

Egress rules control outgoing traffic from selected pods:

```bash
# Check egress rules
kubectl get networkpolicy policy-name -n namespace -o yaml | grep -A 30 egress

# Test if egress is blocking
kubectl exec -it source-pod -n namespace -- curl external-ip:port
```

Fix egress configuration:

```yaml
# Allow specific destinations
spec:
  egress:
  - to:
    - podSelector:
        matchLabels:
          role: database
    ports:
    - protocol: TCP
      port: 5432

# Allow DNS resolution (required for most apps)
spec:
  egress:
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns  # CoreDNS
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53

# Allow external traffic
spec:
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8  # Block internal IPs if needed
```

Solution 4: Fix Missing DNS Allow Rule

DNS must be allowed for service name resolution:

```yaml
# Common mistake: blocking DNS egress
# Fix by adding DNS allow rule
spec:
  egress:
  # Allow DNS
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
  # Allow other traffic
  - to:
    - podSelector:
        matchLabels:
          role: backend
```

Solution 5: Fix Port Configuration

Rules must specify correct ports:

```bash
# Check policy ports
kubectl get networkpolicy policy-name -n namespace -o yaml | grep -A 5 ports

# Check pod listening ports
kubectl exec -it target-pod -n namespace -- netstat -tuln
```

Fix port configuration:

```yaml
# Allow correct ports
spec:
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 8080  # Must match pod listening port

# For multiple ports
    ports:
    - protocol: TCP
      port: 8080
    - protocol: TCP
      port: 9090
```

Solution 6: Check Multiple Policy Interaction

When multiple policies select the same pod, their allow rules are unioned, so traffic is permitted if any of them allows it:

```bash
# List all policies affecting pod
kubectl get networkpolicies -n namespace -o yaml | grep -B 5 -A 10 "podSelector"

# If multiple policies select a pod, traffic allowed by ANY of them is permitted
```
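As a sketch of how combined policies behave (all names here are hypothetical), consider two policies selecting the same pods; a connection succeeds if either one allows it:

```yaml
# Two hypothetical policies selecting the same pods (app: api).
# Their allow rules are unioned: ingress is permitted from
# role=frontend pods (first policy) OR from any namespace labeled
# purpose=monitoring (second policy).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          purpose: monitoring
```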

Debug combined rules:

```bash
# Check each policy's rules
for policy in $(kubectl get netpol -n namespace -o name); do
  echo "=== $policy ==="
  kubectl get $policy -n namespace -o yaml | grep -A 30 "spec:"
done
```

Solution 7: Fix Namespace Selector Issues

Namespace selectors require namespace labels:

```bash
# Check namespace labels
kubectl get namespace namespace-name --show-labels

# Check policy namespace selector
kubectl get networkpolicy policy-name -n namespace -o yaml | grep -A 10 namespaceSelector
```

Label namespaces:

```bash
# Label namespace
kubectl label namespace target-namespace environment=production

# Verify label
kubectl get namespace target-namespace --show-labels
```

Solution 8: Check CNI Plugin Enforcement

NetworkPolicies require CNI plugin support:

```bash
# Check which CNI plugin is installed
kubectl get pods -n kube-system | grep -E "calico|cilium|weave|flannel"

# Check if the CNI supports NetworkPolicy:
# Calico, Cilium, and Weave Net enforce policies;
# Flannel on its own does not
```

Verify Calico policy enforcement:

```bash
# For Calico
kubectl get pods -n kube-system -l k8s-app=calico-node
calicoctl get networkpolicy -n namespace

# Check Calico logs
kubectl logs -n kube-system -l k8s-app=calico-node | grep -i policy
```

Verify Cilium policy enforcement:

```bash
# For Cilium
kubectl get pods -n kube-system -l k8s-app=cilium
kubectl exec -n kube-system cilium-pod -- cilium status
kubectl exec -n kube-system cilium-pod -- cilium policy get
```

Solution 9: Create Debugging Policy

Create a policy to test connectivity:

```yaml
# Temporary policy to allow all traffic for debugging
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: debug-allow-all
  namespace: namespace
spec:
  podSelector: {}  # All pods in namespace
  ingress:
  - {}  # Allow all ingress
  egress:
  - {}  # Allow all egress
  policyTypes:
  - Ingress
  - Egress
```

Apply and test:

```bash
# Apply debug policy
kubectl apply -f debug-allow-all.yaml

# Test connectivity now
kubectl exec -it source-pod -n namespace -- curl target-pod-ip:port

# Remove debug policy after testing
kubectl delete networkpolicy debug-allow-all -n namespace
```

Solution 10: Use Policy Logging (Cilium/Calico)

Some CNIs provide policy logging:

```bash
# Cilium policy trace
kubectl exec -n kube-system cilium-pod -- cilium policy trace \
  --from-label app=frontend \
  --to-label app=backend \
  --to-port 8080/TCP

# Calico policy audit logs
kubectl logs -n kube-system -l k8s-app=calico-node | grep -iE "deny|policy"
```

Verification

After fixing NetworkPolicy issues:

```bash
# Test connectivity
kubectl exec -it source-pod -n namespace -- curl target-service:port

# Check policy application
kubectl describe networkpolicy policy-name -n namespace

# Verify pod-to-pod communication
kubectl exec -it pod1 -n namespace -- ping pod2-ip
kubectl exec -it pod1 -n namespace -- curl pod2-ip:port

# Check DNS resolution works
kubectl exec -it pod -n namespace -- nslookup kubernetes.default
```

NetworkPolicy Testing Script

```bash
#!/bin/bash
# Test connectivity matrix between all pods in a namespace
NAMESPACE="my-namespace"
PODS=$(kubectl get pods -n $NAMESPACE -o jsonpath='{.items[*].metadata.name}')

echo "Testing connectivity between pods:"
for pod1 in $PODS; do
  for pod2 in $PODS; do
    if [ "$pod1" != "$pod2" ]; then
      echo "From $pod1 to $pod2:"
      TARGET_IP=$(kubectl get pod $pod2 -n $NAMESPACE -o jsonpath='{.status.podIP}')
      kubectl exec -n $NAMESPACE $pod1 -- timeout 2 curl -s http://$TARGET_IP:8080 || echo "BLOCKED"
    fi
  done
done
```

NetworkPolicy Blocking Causes Summary

| Cause | Check | Solution |
|---|---|---|
| Pod not selected | `kubectl get pods --show-labels` | Fix podSelector or add labels |
| Missing ingress rule | `kubectl describe netpol` | Add ingress rules for source pods |
| Missing egress rule | `kubectl describe netpol` | Add egress rules for destinations |
| DNS blocked | Can't resolve service names | Add DNS egress rule (port 53) |
| Wrong port | `kubectl describe netpol` | Fix port in policy |
| Namespace selector mismatch | `kubectl get ns --show-labels` | Label namespaces |
| CNI doesn't support policy | Check CNI pods | Use NetworkPolicy-capable CNI |
| Multiple policies combine | List all policies | Review combined rules |

Prevention Best Practices

Always allow DNS egress (port 53 UDP/TCP) in NetworkPolicy. Use namespace labels for namespace-level policies. Test policies in staging before production. Document all policies and their purpose. Monitor network connectivity after policy changes. Use policy logging when available. Start with restrictive policies and add exceptions as needed.
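The "start restrictive" approach combined with the DNS rule can be sketched in one baseline policy; names here are placeholders, not a definitive template:

```yaml
# Sketch of a restrictive baseline: deny all ingress and egress by default,
# but always permit DNS egress so service names still resolve.
# "my-namespace" is a placeholder; adjust for your cluster.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-with-dns
  namespace: my-namespace
spec:
  podSelector: {}  # Applies to every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```

From this baseline, add narrowly scoped allow policies per workload as exceptions are needed.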

NetworkPolicy blocking issues usually come down to missing rules, wrong selectors, or DNS being blocked. The key is checking which policies select your pod and verifying that each necessary connection has an allow rule in ingress and/or egress.