Introduction

An incident-management migration can assign a new escalation policy in PagerDuty while live alerts still trigger the old one. Integrations keep sending events, but incidents notify the retired responder path; one service follows the new escalation chain while another still uses the previous policy; or failures surface only after the old policy is removed. This happens because service bindings, Event Orchestration rules, and integration routing are often changed separately.

Treat this as an incident-routing problem instead of a generic PagerDuty outage. Start by checking which service and escalation policy an affected event actually matched, because migrations often validate the new policy assignment in the UI while live events continue following older routing or orchestration rules.
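As a quick first check, the REST API can show which escalation policy a service is actually bound to, independent of what the UI appeared to confirm. A minimal sketch (the service ID `PXXXXXX` and the token are placeholders you must replace):

```python
import json
import urllib.request

API_BASE = "https://api.pagerduty.com"

def extract_escalation_policy(service: dict) -> tuple[str, str]:
    """Return (id, summary) of the escalation policy bound to a service object."""
    ep = service["escalation_policy"]
    return ep["id"], ep.get("summary", "")

def fetch_service(service_id: str, api_token: str) -> dict:
    """Fetch one service via the PagerDuty REST API (requires a valid token)."""
    req = urllib.request.Request(
        f"{API_BASE}/services/{service_id}",
        headers={
            "Authorization": f"Token token={api_token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["service"]

if __name__ == "__main__":
    # Placeholder IDs -- substitute your own service ID and API token.
    service = fetch_service("PXXXXXX", "YOUR_API_TOKEN")
    policy_id, policy_name = extract_escalation_policy(service)
    print(f"Service is bound to escalation policy {policy_id} ({policy_name})")
```

If the ID printed here is still the old policy's, the fix is a stale service binding rather than anything in Event Orchestration.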

Symptoms

  • A PagerDuty service still triggers the old escalation policy after migration
  • Events arrive, but incidents notify the retired responder chain
  • One service or integration uses the new policy while another still uses the previous one
  • Routing failures begin only after the old escalation policy, team assignment, or integration key is removed
  • The new policy exists, but migrated alerts invoke it inconsistently or not at all
  • The issue started after changing PagerDuty services, Event Orchestration, or incident routing design
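One way to confirm the split-routing symptom above is to group recent incidents by the escalation policy that handled them, since incident objects returned by the REST API include an `escalation_policy` reference. A sketch (the service ID and token are placeholders; the grouping helper is the part you can check offline):

```python
import json
import urllib.request
from collections import Counter
from urllib.parse import urlencode

def policies_seen(incidents: list[dict]) -> Counter:
    """Count how many incidents each escalation policy handled."""
    return Counter(i["escalation_policy"]["id"] for i in incidents)

def fetch_recent_incidents(service_id: str, api_token: str) -> list[dict]:
    """List recent incidents for one service (requires a valid token)."""
    url = "https://api.pagerduty.com/incidents?" + urlencode(
        {"service_ids[]": service_id}
    )
    req = urllib.request.Request(
        url, headers={"Authorization": f"Token token={api_token}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["incidents"]

if __name__ == "__main__":
    incidents = fetch_recent_incidents("PXXXXXX", "YOUR_API_TOKEN")
    # More than one key in this Counter means alerts are splitting
    # across the old and new escalation policies.
    print(policies_seen(incidents))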

Common Causes

  • The PagerDuty service still has the old escalation policy assigned
  • Event Orchestration rules still route matching events into the previous service or responder path
  • An integration key, global event rule, or upstream tool still targets the retired service
  • Terraform, API automation, or service templates restored the earlier policy assignment
  • One service dependency or event rule was updated while another still maps alerts to the previous policy
  • Validation confirmed the new policy existed but did not verify which service and escalation chain live events actually matched
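Several of these causes reduce to the same question: which services still reference the retired policy? A sketch that pages through the service list and flags stale bindings (the policy ID `POLD123` and the token are placeholders; the REST API paginates with `limit`, `offset`, and `more`):

```python
import json
import urllib.request
from urllib.parse import urlencode

def services_bound_to(services: list[dict], policy_id: str) -> list[str]:
    """Return IDs of services still bound to a given escalation policy."""
    return [
        s["id"]
        for s in services
        if s.get("escalation_policy", {}).get("id") == policy_id
    ]

def fetch_all_services(api_token: str) -> list[dict]:
    """Page through /services until the API reports no more results."""
    services, offset = [], 0
    while True:
        url = "https://api.pagerduty.com/services?" + urlencode(
            {"limit": 100, "offset": offset}
        )
        req = urllib.request.Request(
            url, headers={"Authorization": f"Token token={api_token}"}
        )
        with urllib.request.urlopen(req) as resp:
            page = json.load(resp)
        services.extend(page["services"])
        if not page.get("more"):
            return services
        offset += 100

if __name__ == "__main__":
    # "POLD123" is a placeholder for the retired policy's ID.
    stale = services_bound_to(fetch_all_services("YOUR_API_TOKEN"), "POLD123")
    print("Services still on the old policy:", stale)
```

An empty result here narrows the search to Event Orchestration rules, integration keys, and automation templates rather than service bindings. If Terraform manages these services, also check the `escalation_policy` attribute in the provider configuration, since a plan/apply cycle can silently restore the old assignment.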

Step-by-Step Fix

  1. Capture one affected event and record the integration, matched service, escalation policy, and orchestration rule it actually uses, because the live incident path determines which responder chain really receives the alert.
  2. Compare that active incident path with the intended post-migration design, because one stale service binding or routing rule can keep production alerts tied to the retired escalation policy.
  3. Review service settings, escalation policy assignments, Event Orchestration rules, integration keys, and automation templates for references to the old policy or service, because PagerDuty incident routing can be controlled by several layers.
  4. Check each service, event rule, and upstream alert source separately if behavior differs, because migrations often update one alert path while another still lands in the previous responder chain.
  5. Update the authoritative service assignment and event-routing logic so affected incidents match the intended escalation policy, because creating the new policy alone does not retarget existing events.
  6. Trigger a controlled test event and confirm the intended service and escalation policy handle it, because a received event does not prove the right responder path took ownership.
  7. Verify the old escalation policy no longer receives incidents from migrated services or event rules, because split paging paths can remain hidden while both policies stay active.
  8. Review responder permissions, maintenance windows, and event-rule precedence if routing still fails, because the destination can be correct while policy state or rule order still blocks the new path.
  9. Document which team owns service configuration, event routing, and migration validation so future PagerDuty cutovers verify the actual runtime escalation target before retiring the previous policy.
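The controlled test in step 6 can be sent through the Events API v2 trigger endpoint. A sketch assuming a placeholder integration key and deduplication key; after sending, confirm in the UI or via the incidents API that the new escalation policy took the page:

```python
import json
import urllib.request

EVENTS_URL = "https://events.pagerduty.com/v2/enqueue"

def build_test_event(routing_key: str, dedup_key: str) -> dict:
    """Build an Events API v2 trigger payload for a controlled test alert."""
    return {
        "routing_key": routing_key,
        "event_action": "trigger",
        "dedup_key": dedup_key,
        "payload": {
            "summary": "Migration validation: escalation-path test event",
            "source": "migration-validation",
            "severity": "info",
        },
    }

def send_event(event: dict) -> dict:
    """POST the event to the Events API (needs only the integration key)."""
    req = urllib.request.Request(
        EVENTS_URL,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Use the integration key of the *migrated* service so the test
    # exercises the exact routing path production events will follow.
    event = build_test_event("YOUR_INTEGRATION_KEY", "migration-test-001")
    print(send_event(event))
```

Resolve the test incident afterward (an `event_action` of `"resolve"` with the same `dedup_key`) so the validation alert does not linger in the new responder chain.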