You deployed your application and created a Kubernetes Service, but when you try to access it, connections fail. The service shows up in kubectl get svc, but pods behind it are unreachable. Service accessibility issues are common and can stem from missing endpoints, DNS problems, network policies, or configuration errors.

Understanding Kubernetes Services

Services provide stable network endpoints for pods. When a service is not accessible, clients cannot connect despite the service existing. The issue could be at any layer: the service selector not matching pods, pods not being ready, DNS resolution failing, or network policies blocking traffic.
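
The whole chain is visible in a minimal Service and Deployment pair; the names, image, and ports below are illustrative:

```yaml
# Hypothetical example: the Service selector must match the pod template
# labels, and targetPort must match the containerPort.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp          # Must match the pod labels below
  ports:
  - port: 80            # Port clients connect to
    targetPort: 8080    # Must match the containerPort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp      # Matched by the Service selector
    spec:
      containers:
      - name: myapp
        image: myapp:latest   # Placeholder image
        ports:
        - containerPort: 8080
        readinessProbe:       # The pod must be Ready to become an endpoint
          httpGet:
            path: /health
            port: 8080
```

If any link in this chain is off, the service still exists but traffic never arrives.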

Diagnosis Commands

Check service configuration:

```bash
# List services
kubectl get svc -n namespace

# Describe a service
kubectl describe svc service-name -n namespace

# Check the service selector
kubectl get svc service-name -n namespace -o jsonpath='{.spec.selector}'
```

Verify endpoints:

```bash
# Check whether the service has endpoints
kubectl get endpoints service-name -n namespace

# Describe the endpoints
kubectl describe endpoints service-name -n namespace

# Verify the endpoint IPs match the pod IPs
kubectl get pods -n namespace -o wide
```

Test DNS resolution:

```bash
# Test DNS from within the cluster
kubectl run dns-test --image=busybox --rm -it --restart=Never -- nslookup service-name.namespace

# Test the FQDN
kubectl run dns-test --image=busybox --rm -it --restart=Never -- nslookup service-name.namespace.svc.cluster.local

# Check DNS pod status
kubectl get pods -n kube-system -l k8s-app=kube-dns
```
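
The names used above follow a fixed pattern: service.namespace.svc.cluster-domain. A tiny helper that builds the full name, assuming the default cluster.local cluster domain (a cluster-level setting that can differ):

```shell
#!/bin/sh
# Build the in-cluster DNS name for a Service.
# Assumes the default cluster domain "cluster.local".
svc_fqdn() {
  svc=$1
  ns=$2
  echo "${svc}.${ns}.svc.cluster.local"
}

svc_fqdn my-service production
# -> my-service.production.svc.cluster.local
```

Short names like service-name.namespace only resolve because the pod's resolv.conf search domains expand them to this FQDN, which is why testing the FQDN separately is worthwhile.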

Test connectivity:

```bash
# Test service connectivity
kubectl run curl-test --image=curlimages/curl --rm -it --restart=Never -- curl http://service-name.namespace:port/

# Test direct pod connectivity
kubectl run curl-test --image=curlimages/curl --rm -it --restart=Never -- curl http://pod-ip:port/
```

Common Solutions

Solution 1: Fix Service Selector Mismatch

The service's selector must match the pod labels:

```bash
# Check the service selector
kubectl get svc my-service -n namespace -o yaml | grep -A 3 selector

# Check the pod labels
kubectl get pods -n namespace -o wide --show-labels
```

If labels don't match:

```yaml
# Service with the wrong selector
spec:
  selector:
    app: myapp-v1  # Pods have the label app=myapp
---
# Fixed selector, matching the pods
spec:
  selector:
    app: myapp
```

Or fix pod labels:

```bash
# Add the missing label to the pods
kubectl label pod pod-name app=myapp -n namespace
```

Solution 2: Fix Missing Endpoints

If service has no endpoints:

```bash
# Check the endpoints
kubectl get endpoints my-service -n namespace

# If the list is empty, check pod readiness
kubectl describe pods -n namespace | grep -A 5 "Readiness"
```

Pods must be ready to become endpoints:

```yaml
# The pods might have a failing readiness probe
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
---
# If the probe is too aggressive, relax it
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30  # More time to become ready
  periodSeconds: 10
  failureThreshold: 3
```

Solution 3: Fix Service Port Configuration

Verify service port mapping:

```bash
# Check the service ports
kubectl get svc my-service -n namespace -o jsonpath='{.spec.ports}'
```

Fix port configuration:

```yaml
# Wrong port configuration
spec:
  ports:
  - port: 80
    targetPort: 8080  # Should match the container port
```

Verify what the container actually exposes:

```bash
kubectl get pods -n namespace -o jsonpath='{.spec.containers[0].ports}'
```

Then align targetPort with the container port:

```yaml
spec:
  ports:
  - port: 80
    targetPort: 80  # Matches the container port
    protocol: TCP
```
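
An alternative that avoids numeric drift is a named targetPort. This is standard Kubernetes behavior, though the names below are illustrative:

```yaml
# The container declares a named port...
containers:
- name: myapp
  ports:
  - name: http
    containerPort: 8080
---
# ...and the Service references it by name, so renumbering the
# container port later only requires a change in one place.
spec:
  ports:
  - port: 80
    targetPort: http
```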

Solution 4: Fix DNS Issues

If DNS resolution fails:

```bash
# Check the CoreDNS/kube-dns pods
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl logs -n kube-system -l k8s-app=kube-dns

# Check the DNS service
kubectl get svc -n kube-system kube-dns
kubectl describe svc -n kube-system kube-dns
```

Test DNS configuration:

```bash
# Check the pod DNS configuration
kubectl get pod pod-name -n namespace -o jsonpath='{.spec.dnsPolicy}'
kubectl get pod pod-name -n namespace -o jsonpath='{.spec.dnsConfig}'

# Create an interactive pod for DNS debugging
kubectl run dns-debug --image=busybox --rm -it --restart=Never -- sh
# Inside the pod:
cat /etc/resolv.conf
nslookup kubernetes.default
```

Fix DNS if needed:

```yaml
# Custom DNS configuration
spec:
  dnsPolicy: None
  dnsConfig:
    nameservers:
    - 10.96.0.10
    searches:
    - namespace.svc.cluster.local
    - svc.cluster.local
    - cluster.local
    options:
    - name: ndots
      value: "2"
```

Solution 5: Fix Network Policy Blocking Traffic

Network policies may block service access:

```bash
# List network policies
kubectl get networkpolicies -n namespace
kubectl describe networkpolicy policy-name -n namespace
```

If policies exist, verify they allow your traffic:

```yaml
# Policy that might block traffic
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: allowed  # Only pods with this label can connect
---
# Fixed policy, allowing service access
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector: {}
    - podSelector: {}
    ports:
    - protocol: TCP
      port: 8080
```

Solution 6: Fix External Access Issues

For services exposed externally:

```bash
# NodePort service
kubectl get svc my-service -n namespace
# Check the NodePort value and node IPs

# LoadBalancer service
kubectl describe svc my-service -n namespace
# Check the LoadBalancer status and external IP

# Ingress
kubectl get ingress -n namespace
kubectl describe ingress ingress-name -n namespace
```

Fix NodePort access:

```bash
# Verify the node has an external IP
kubectl get nodes -o wide

# Check that the firewall allows the NodePort
# Default NodePort range: 30000-32767
curl http://node-ip:node-port/
```
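
The NodePort range is fixed by the API server (the --service-node-port-range flag, 30000-32767 unless overridden). A quick pure-shell sanity check for whether a port falls in the default range:

```shell
#!/bin/sh
# Check whether a port lies in the default NodePort range (30000-32767).
# Assumes the cluster uses the default --service-node-port-range.
in_nodeport_range() {
  port=$1
  [ "$port" -ge 30000 ] && [ "$port" -le 32767 ]
}

if in_nodeport_range 31080; then echo "in range"; else echo "out of range"; fi
# -> in range
```

If the port you are probing is outside this range, it is not a NodePort at all; you are likely looking at the service port or targetPort instead.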

Fix LoadBalancer:

```bash
# If the external IP is pending, check cloud-provider events
kubectl describe svc my-service -n namespace | grep -A 5 Events
```

Allocation by the cloud provider can take time. To request a specific address, set spec.loadBalancerIP:

```yaml
spec:
  type: LoadBalancer
  loadBalancerIP: "192.168.1.100"
```

Solution 7: Fix Headless Service Issues

Headless services (clusterIP: None) behave differently:

```bash
# Check whether the service is headless
kubectl get svc my-service -n namespace -o jsonpath='{.spec.clusterIP}'
# Should return "None" for a headless service
```

For headless services, DNS returns individual pod IPs:

```bash
# Test DNS for a headless service
kubectl run dns-test --image=busybox --rm -it --restart=Never -- nslookup my-service.namespace
# Should return one A record per ready pod
```

Headless service configuration:

```yaml
spec:
  clusterIP: None  # Makes the service headless
  selector:
    app: myapp
  ports:
  - port: 8080
    targetPort: 8080
```

Solution 8: Fix Service Account and RBAC Issues

Service discovery may need RBAC permissions:

```bash
# Check whether pods can query the API for service discovery
kubectl auth can-i get services -n namespace --as=system:serviceaccount:namespace:default

# Check role bindings
kubectl get rolebindings -n namespace
kubectl describe rolebinding binding-name -n namespace
```

Create necessary RBAC:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: namespace
  name: service-reader
rules:
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: namespace
  name: read-services
subjects:
- kind: ServiceAccount
  name: default
  namespace: namespace
roleRef:
  kind: Role
  name: service-reader
  apiGroup: rbac.authorization.k8s.io
```

Verification

After fixing service issues:

```bash
# Test DNS resolution
kubectl run test --image=busybox --rm -it --restart=Never -- nslookup service-name.namespace

# Test service connectivity
kubectl run test --image=curlimages/curl --rm -it --restart=Never -- curl -v http://service-name.namespace:port/

# Verify the endpoints exist
kubectl get endpoints service-name -n namespace

# Check the logs of client pods
kubectl logs client-pod -n namespace
```

Common Service Accessibility Issues

| Issue | Symptoms | Solution |
|-------|----------|----------|
| Selector mismatch | Endpoints empty | Fix selector or pod labels |
| Pods not ready | Endpoints missing IPs | Fix readiness probe |
| Wrong targetPort | Connection refused | Match container port |
| DNS failure | Name not resolved | Check DNS pods/config |
| Network policy | Connection blocked | Allow traffic in policy |
| Pending LoadBalancer | External IP pending | Check cloud provider |

Service accessibility requires the chain to be complete: correct selector matches pods, pods are ready, DNS resolves the name, and traffic reaches the pods. Debug each layer systematically.