## Introduction

When a Kubernetes Service has no endpoints, it cannot route traffic to any pods. Any client that tries to reach it gets a connection refused error, breaking microservice communication.

## Symptoms

- `kubectl get endpoints <service-name>` shows `ENDPOINTS = <none>`
- Connection refused when accessing the service
- The service cluster IP returns no response
- Pods exist but are not selected by the service
- DNS resolves the service name, but traffic goes nowhere

## Common Causes

- Pod labels do not match the service selector
- Pods are in a different namespace than the service
- Pods are not in a Running/Ready state
- Service type is misconfigured (e.g., an ExternalName service pointing to the wrong target)
- Label typo or case mismatch between the service and pods
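The first cause is by far the most common. A Service selects a pod only when every key/value pair in its selector appears among the pod's labels; extra pod labels are ignored. A minimal shell sketch of that matching rule, using hypothetical label values:

```shell
# Hypothetical values, roughly what you would see from:
#   kubectl get svc my-svc -o jsonpath='{.spec.selector}'   ->  app=my-svc
#   kubectl get pod <pod> -o jsonpath='{.metadata.labels}'  ->  app=my-app,...
selector='app=my-svc'
pod_labels='app=my-app,pod-template-hash=5d4f8'

# A pod is selected only if the selector pair appears among its labels.
case ",$pod_labels," in
  *",$selector,"*) echo "selected: pod backs the service" ;;
  *)               echo "not selected: endpoints stay empty" ;;
esac
```

Here `app=my-svc` never appears in the pod's labels, so the pod is not selected and the endpoints list stays empty.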

## Step-by-Step Fix

1. **Check service selector vs. pod labels**:

   ```bash
   kubectl get svc <service-name> -n <namespace> -o jsonpath='{.spec.selector}'
   kubectl get pods -n <namespace> --show-labels
   ```

2. **Compare and fix the mismatch**:

   ```bash
   # Check what the service expects
   kubectl get svc my-svc -o jsonpath='{.spec.selector}'

   # Check what labels the pods have
   kubectl get pods -l app=my-app -n <namespace> --show-labels

   # Fix the mismatch on the Service side. Service selectors are mutable,
   # while a Deployment's .spec.selector is immutable; patching only the
   # pod-template labels out from under it is rejected by the API server.
   kubectl patch svc my-svc -n <namespace> \
     --type='json' -p='[{"op": "replace", "path": "/spec/selector/app", "value": "my-app"}]'
   ```
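When the selector has several entries, it helps to see exactly which ones no pod label satisfies. A small sketch of that comparison, using hypothetical captures of the two commands above flattened to one `key=value` per line:

```shell
# Hypothetical flattened outputs: the Service selector and one pod's labels
selector_kv='app=my-svc
tier=backend'
pod_kv='app=my-app
tier=backend
pod-template-hash=5d4f8'

# Print every selector entry the pod's labels do not satisfy;
# each printed line is a reason the endpoints list stays empty.
printf '%s\n' "$selector_kv" | while IFS= read -r kv; do
  printf '%s\n' "$pod_kv" | grep -qxF "$kv" || echo "unsatisfied: $kv"
done
```

With these values only `app=my-svc` is reported; `tier=backend` matches, so the `app` label is the one to fix.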

3. **Verify that endpoints appear after the fix**:

   ```bash
   kubectl get endpoints <service-name> -n <namespace>
   # ENDPOINTS should now list pod IP:port pairs instead of <none>
   ```

4. **Test connectivity**:

   ```bash
   kubectl run test-pod --rm -it --image=busybox --restart=Never -- \
     wget -qO- http://<service-name>.<namespace>.svc.cluster.local
   ```

## Prevention

- Use Helm or Kustomize to keep labels consistent across Services and workloads
- Verify labels with `kubectl get pods --show-labels` before deploying
- Monitor endpoint counts with Prometheus metrics (e.g., from kube-state-metrics)
- Use admission webhooks to validate service/pod label alignment
- Document label conventions across teams
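The consistency checks above can be reduced to a tiny pre-deploy guard in CI. The values below are hypothetical placeholders for fields extracted from your Service and Deployment manifests (e.g., with yq or your templating tool):

```shell
# Hypothetical values pulled from the manifests about to be applied:
svc_selector_app='my-app'       # Service:    .spec.selector.app
deploy_template_app='my-app'    # Deployment: .spec.template.metadata.labels.app

# Fail the pipeline before a mismatched pair ever reaches the cluster.
if [ "$svc_selector_app" = "$deploy_template_app" ]; then
  echo "labels aligned"
else
  echo "label mismatch: service would have no endpoints" >&2
  exit 1
fi
```

Running this as an early CI step catches the label typo/case-mismatch class of failures before deploy rather than after traffic starts failing.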