Introduction

A Kubernetes cutover can leave NGINX Ingress Controller routing some or all requests to the old backend even after the new application service is live. The new deployment appears healthy, yet live requests keep landing on the retired service; one path or host header follows the new target while another still uses the previous upstream; or failures begin only after the old service is removed. This happens because ingress rules, canary annotations, and service selectors often change at different stages of a migration.

Treat this as an ingress-to-service routing problem rather than a generic Kubernetes outage. Start by checking which Service and endpoint an affected request actually reaches through the ingress controller: migrations often validate the new deployment directly (for example, by port-forwarding to a pod), while live edge traffic continues to follow older routing objects.

Symptoms

  • NGINX Ingress Controller still routes to the old backend service after cutover
  • The new deployment is healthy, but live traffic still lands on the retired service
  • One host, path, or canary route uses the new backend while another still uses the previous one
  • Failures begin only after the old service or deployment is scaled down or deleted
  • Pods behind the new service are healthy, but edge traffic never reaches them consistently
  • The issue started after moving Kubernetes services, ingress rules, or progressive-delivery settings

Common Causes

  • The Ingress resource still references the old Service name or port
  • Canary, split-routing, or annotation-based ingress rules still send a subset of traffic to the previous backend
  • The Service selector still points at old pods even though the deployment name changed
  • Multiple Ingress objects or conflicting class assignments define different routes for the same host
  • Controller reload lag, a stale configuration, or a GitOps rollback restored the earlier routing definition
  • Validation confirmed the new backend responded directly but did not verify which service live ingress traffic actually used
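
Several of these causes leave a detectable trace in the Ingress objects themselves. A minimal audit sketch, assuming hypothetical names (`shop-v1`, `shop-canary`) and the standard ingress-nginx canary annotations, scans every Ingress (e.g. from `kubectl get ingress -A -o json`) for remaining references to the retired Service:

```python
# Sketch: find every Ingress that still references the retired Service,
# including canary Ingress objects that can keep a hidden traffic split
# alive. Names and hosts are hypothetical examples.

def references_to(ingresses: list[dict], old_service: str) -> list[str]:
    hits = []
    for ing in ingresses:
        name = ing["metadata"]["name"]
        annotations = ing["metadata"].get("annotations", {})
        canary = annotations.get("nginx.ingress.kubernetes.io/canary") == "true"
        for rule in ing["spec"].get("rules", []):
            for p in rule.get("http", {}).get("paths", []):
                if p["backend"]["service"]["name"] == old_service:
                    weight = annotations.get(
                        "nginx.ingress.kubernetes.io/canary-weight")
                    hits.append(f"{name} (canary={canary}, weight={weight})")
    return hits

ingresses = [
    # The primary Ingress was updated to the new Service...
    {"metadata": {"name": "shop-main", "annotations": {}},
     "spec": {"rules": [{"host": "shop.example.com", "http": {"paths": [
         {"path": "/", "pathType": "Prefix",
          "backend": {"service": {"name": "shop-v2",
                                  "port": {"number": 80}}}}]}}]}},
    # ...but a leftover canary Ingress still sends 20% of traffic to the old one.
    {"metadata": {"name": "shop-canary", "annotations": {
         "nginx.ingress.kubernetes.io/canary": "true",
         "nginx.ingress.kubernetes.io/canary-weight": "20"}},
     "spec": {"rules": [{"host": "shop.example.com", "http": {"paths": [
         {"path": "/", "pathType": "Prefix",
          "backend": {"service": {"name": "shop-v1",
                                  "port": {"number": 80}}}}]}}]}},
]

print(references_to(ingresses, "shop-v1"))  # ['shop-canary (canary=True, weight=20)']
```

An empty result for the old Service name is a reasonable precondition before scaling it down; a non-empty result explains persistent split routing even when the primary route looks correct.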

Step-by-Step Fix

  1. Capture one affected request and record the host, path, ingress class, backend service, and endpoint it actually reaches (for example, via `kubectl describe ingress` and the controller's access logs), because the live edge-to-service path determines where user traffic really lands.
  2. Compare that active route with the intended post-cutover design, because one stale ingress rule or service selector can keep production traffic tied to the retired backend.
  3. Review Ingress resources, canary annotations, Service definitions, endpoint selection, and Git-managed manifests for references to the old backend, because ingress routing depends on both controller config and service discovery objects.
  4. Check each host, path, TLS server block, and canary rule separately if behavior differs, because migrations often fix the primary route while another ingress branch still targets the previous service.
  5. Update the authoritative Ingress and Service mapping so affected requests resolve to the intended backend and pod set, because deploying the new application alone does not retarget edge routing.
  6. Reload or resync the controller if needed and run a controlled request through the ingress endpoint, because a healthy Deployment does not prove the right backend answered live traffic.
  7. Confirm the intended service and pod set now receive requests for the affected route, because direct pod health can look correct while ingress still serves the old path.
  8. Verify the old backend no longer receives production traffic from the controller, because split routing can remain hidden while both services stay available.
  9. Document which team owns ingress rules, service selectors, and cutover validation so future Kubernetes migrations verify the actual runtime backend before retiring the previous service.
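
The selector checks in steps 3 and 7 can be sketched as a simple label comparison: a Service selects a pod only if the pod's labels contain every key/value pair in the selector, so a selector pinned to the old version silently ignores the new pods. The names and labels below (`app=shop`, `version=v1`/`v2`) are hypothetical; in practice, compare the output of `kubectl get svc -o json` with the pod template labels of the new deployment:

```python
# Sketch: verify a Service selector actually matches the new deployment's
# pod labels. A stale selector keeps routing traffic to the retired pods
# even though the Ingress points at the right Service name.

def selector_matches(service: dict, pod_labels: dict) -> bool:
    """True only if every selector key/value appears in the pod's labels."""
    selector = service["spec"].get("selector", {})
    return bool(selector) and all(
        pod_labels.get(k) == v for k, v in selector.items())

service = {"spec": {"selector": {"app": "shop", "version": "v1"}}}  # stale selector
new_pod_labels = {"app": "shop", "version": "v2"}  # new deployment's pods
old_pod_labels = {"app": "shop", "version": "v1"}  # retired pods

print(selector_matches(service, new_pod_labels))  # False: new pods are ignored
print(selector_matches(service, old_pod_labels))  # True: old pods still selected
```

If the check fails for the new pods, updating the Service selector (or the deployment's pod labels) is the fix, and the Service's Endpoints object should then list the new pod IPs.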