Introduction

A CI migration can move pipelines to a new GitLab instance while GitLab Runner still polls the old coordinator for jobs. Runners stay online, yet work keeps executing on the retired platform; one runner group uses the new instance while another still checks the previous URL; or job pickup fails only after the old GitLab server or its token is removed. This happens because runner registration, config.toml, and fleet automation often drift separately during a cutover.

Treat this as a runner-registration problem, not a generic CI outage. Start by checking which GitLab instance an affected runner actually contacts for job pickup: migrations often validate the new platform in the UI while live executors continue polling the coordinator recorded in their older runner configuration.
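The coordinator a runner polls is recorded in the url field of each [[runners]] entry in config.toml (typically /etc/gitlab-runner/config.toml for a system-level install). A minimal sketch of that check, using a local sample file and hypothetical hostnames (old-gitlab.example.com) rather than a live runner host:

```shell
# Sample file standing in for /etc/gitlab-runner/config.toml;
# the hostname and token are hypothetical placeholders.
cat > config.toml <<'EOF'
concurrent = 4

[[runners]]
  name = "build-pool-1"
  url = "https://old-gitlab.example.com/"
  token = "REDACTED"
  executor = "shell"
EOF

# Print every coordinator URL this runner service would poll.
grep -E '^[[:space:]]*url[[:space:]]*=' config.toml
```

If the printed URL is the retired instance, the runner never migrated, regardless of what the new instance's admin UI shows.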

Symptoms

  • GitLab Runner still pulls jobs from the old instance after migration
  • Runners appear healthy, but jobs keep executing from the retired GitLab platform
  • One runner or executor group uses the new instance while another still uses the previous one
  • Job pickup failures begin only after the old GitLab server, token, or certificate is removed
  • The new instance is healthy, but migrated runners never reliably pick up its jobs
  • The issue started after moving GitLab, runner fleets, or executor management workflows

Common Causes

  • The runner is still registered against the old GitLab instance URL
  • config.toml or a containerized runner config mount still contains the previous coordinator address
  • Automation, autoscaling images, or bootstrap scripts keep re-registering runners against the retired instance
  • The runner entry was updated in the new platform, but the actual runner registration target on the host was not changed
  • One executor pool or autoscaling manager was updated while another still polls the earlier coordinator
  • Validation confirmed that the new GitLab instance accepted a manual registration, but never verified which coordinator live runners actually contacted
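These causes rarely live in a single file, so it helps to sweep every runner-related config source at once for the retired hostname. A sketch over a sample directory with hypothetical file contents and hostnames; on a real host you would point the same grep at /etc/gitlab-runner, your bootstrap repo, and autoscaling image sources:

```shell
mkdir -p runner-audit

# Sample artifacts standing in for real config sources; hostnames are hypothetical.
cat > runner-audit/config.toml <<'EOF'
[[runners]]
  url = "https://old-gitlab.example.com/"
EOF

cat > runner-audit/bootstrap.sh <<'EOF'
gitlab-runner register --non-interactive --url "https://old-gitlab.example.com/" --executor docker
EOF

cat > runner-audit/runner.env <<'EOF'
CI_SERVER_URL=https://new-gitlab.example.com/
EOF

# List every file that still references the retired instance.
grep -rl 'old-gitlab.example.com' runner-audit
```

Here the sweep would flag both config.toml and bootstrap.sh, showing that fixing the local config alone still leaves automation re-registering runners against the old coordinator.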

Step-by-Step Fix

  1. Capture one affected runner and record the coordinator URL, runner token scope, executor type, and config source it actually uses, because the live registration path determines where job polling really lands.
  2. Compare that active runner path with the intended post-migration CI design, because one stale runner registration can keep whole executor pools tied to the retired GitLab instance.
  3. Review config.toml, registration scripts, autoscaling images, environment variables, and runner-management automation for references to the old instance, because GitLab Runner targeting depends on both local config and fleet bootstrap behavior.
  4. Check each executor pool, runner manager, and autoscaling path separately if behavior differs, because migrations often update one runner group while another still polls the previous coordinator.
  5. Update the authoritative runner registration and bootstrap configuration so affected runners poll the intended GitLab instance, because adding runners to the new platform alone does not retarget existing executors.
  6. Re-register or restart the affected runner as needed and confirm the intended instance now assigns jobs to it, because a running runner service does not prove the right coordinator owns it.
  7. Verify the old GitLab instance no longer receives job polling from migrated runners, because split CI ownership can remain hidden while both coordinators stay reachable.
  8. Review token validity, certificate trust, and runner scope if pickup still fails, because the destination can be correct while access or registration policy still blocks the new path.
  9. Document which team owns runner bootstrap, registration, and migration validation so future GitLab cutovers verify the actual runtime coordinator before retiring the previous instance.
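Steps 5 through 7 can be backed by a small gate in the migration automation: refuse to retire the old instance while any [[runners]] entry still targets it. The actual retargeting itself happens per host with gitlab-runner register and gitlab-runner restart. A sketch of the gate, checked against a sample migrated config with hypothetical hostnames:

```shell
OLD_HOST="old-gitlab.example.com"   # hypothetical retired coordinator

check_runner_config() {
  # Fail (return 1) if the given config.toml still polls the retired instance.
  if grep -Eq "^[[:space:]]*url[[:space:]]*=.*${OLD_HOST}" "$1"; then
    echo "STALE: $1 still targets ${OLD_HOST}"
    return 1
  fi
  echo "OK: $1"
}

# Sample config after retargeting; hostname hypothetical.
cat > migrated.toml <<'EOF'
[[runners]]
  url = "https://new-gitlab.example.com/"
EOF

check_runner_config migrated.toml | tee gate-result.txt
```

Running this gate across every runner host before decommissioning makes step 7's verification explicit instead of relying on both coordinators staying quiet.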