Introduction

A Kubernetes cutover can register the new cluster in Argo CD while applications continue syncing to the old one. Git changes still reconcile, but workloads surface in the retired environment, one application lands in the new cluster while another still targets the previous API server, or deployments start failing only after the old cluster is disabled. This happens because cluster registrations, application destinations, and project policy often move in separate steps.

Treat this as a deployment-target problem instead of a generic GitOps outage. Start by checking which cluster server and namespace an affected Argo CD application actually syncs to, because migrations often validate the new cluster connection in Argo CD while live applications continue following older destination settings.
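The check can be sketched as a small shell snippet. In a live environment the spec would come from `argocd app get <app> -o yaml` or `kubectl -n argocd get application <app> -o yaml`; here a saved copy of a hypothetical application spec stands in so the extraction logic is visible (the app name, server URL, and namespace are illustrative, not from the original document):

```shell
# Inspect the destination an Argo CD application actually syncs to.
# In practice, fetch the spec first, e.g.:
#   argocd app get payments -o yaml > app.yaml
# The spec below is a hypothetical stand-in for that output.
app_spec='
spec:
  destination:
    server: https://old-cluster.example.com:6443
    namespace: payments
'

# Pull out the server URL and namespace the controller is targeting.
server=$(echo "$app_spec" | awk '/server:/ {print $2}')
namespace=$(echo "$app_spec" | awk '/namespace:/ {print $2}')

echo "live destination: $server (namespace: $namespace)"
```

If the printed server URL is still the retired cluster's API endpoint, the application never moved, regardless of how healthy the new cluster registration looks.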

Symptoms

  • Argo CD still deploys to the old Kubernetes cluster after cutover
  • Git syncs look healthy, but workloads appear only in the retired cluster
  • One application or project uses the new cluster while another still uses the previous one
  • Deployments fail only after the old cluster API or credentials are removed
  • The new cluster is registered and healthy, but migrated apps never land there
  • The issue started after moving Kubernetes clusters, Argo CD, or GitOps policy

Common Causes

  • The Argo CD application destination still references the old cluster server
  • Project policy or cluster allowlists were updated for one app group but not another
  • A second cluster registration still matches the old API endpoint or name
  • ApplicationSet, Helm values, or Kustomize overlays still render the previous destination
  • Repo templates or automation keep recreating apps with the retired cluster target
  • Validation confirmed the new cluster could sync but did not verify where live applications actually deployed

Step-by-Step Fix

  1. Capture one affected Argo CD application and record the destination cluster, server URL, namespace, and project it actually uses, because the live sync target determines where GitOps changes really land.
  2. Compare that active destination with the intended post-cutover Kubernetes design, because one stale application spec can keep ongoing deployments tied to the retired cluster.
  3. Review cluster registrations, application manifests, ApplicationSet generators, project policy, and repo templates for references to the old cluster, because Argo CD destination control often spans both platform config and Git-managed app definitions.
  4. Check each app, project, and generator path separately if behavior differs, because migrations often fix one application fleet while another still uses the previous cluster target.
  5. Update the authoritative cluster destination and policy configuration so affected applications sync to the intended cluster, because adding the new cluster to Argo CD alone does not retarget existing apps.
  6. Trigger a controlled sync and confirm the intended cluster and namespace receive the workload update, because a successful sync status does not prove the right cluster handled it.
  7. Verify the old cluster no longer receives reconciliations, namespace updates, or new workloads from migrated applications, because split GitOps paths can stay hidden while both clusters remain reachable.
  8. Review cluster credentials, RBAC, and project restrictions if syncs still fail, because the destination can be correct while trust or policy still blocks deployment.
  9. Document which team owns cluster registrations, app templates, and cutover validation so future Argo CD migrations verify the actual deployment target before retiring the previous cluster.
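The verification in steps 5 through 7 can be sketched as a small comparison loop. In practice the live destination for each app would come from `argocd app get "$app" -o yaml` (reading `.spec.destination.server`); the app names and URLs below are hypothetical stand-ins used so the logic is runnable on its own:

```shell
# Sketch: confirm each migrated application targets the intended cluster
# before retiring the old one. Live values would come from the argocd CLI;
# the inputs here are illustrative.
intended="https://new-cluster.example.com:6443"

check_destination() {
  app="$1"; live="$2"
  if [ "$live" = "$intended" ]; then
    echo "OK    $app -> $live"
  else
    echo "STALE $app -> $live (expected $intended)"
  fi
}

# One app retargeted correctly, one still pointing at the retired cluster.
check_destination payments  "https://new-cluster.example.com:6443"
check_destination inventory "https://old-cluster.example.com:6443"
```

Any `STALE` line is an application that will break the moment the old cluster's API or credentials are removed, which is exactly the failure mode described in the Symptoms section.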