Introduction
A ServiceNow migration can bring the new instance online while MID Servers still phone home to the old one. Discovery appears to run, but probes execute against the retired tenant; one MID Server uses the new instance while another still registers to the previous environment; or integrations fail only after the old instance is locked down. This happens because config.xml, proxy settings, certificates, and local wrapper settings often lag behind the cutover.
Treat this as an integration-path problem instead of a generic ServiceNow outage. Start by checking which instance URL, proxy path, and certificate trust an affected MID Server actually uses at runtime, because migrations often validate the new ServiceNow tenant in the browser while local MID infrastructure continues polling the earlier instance.
Symptoms
- A ServiceNow MID Server still reports to the old instance after migration
- Discovery or orchestration jobs continue registering against the retired tenant
- One MID Server uses the new instance while another still uses the previous environment
- Integrations fail only after the old ServiceNow instance is restricted or retired
- The new instance is healthy, but probes or ECC queue activity never appear there
- The issue started after moving ServiceNow instances, discovery infrastructure, or proxy settings
Common Causes
- MID Server config.xml still contains the old instance URL
- Proxy settings or outbound routing still send MID traffic to the previous ServiceNow environment
- Certificates or trust stores still validate the retired instance path
- Credentials, wrapper settings, or automation updated one MID Server group but not another
- Cloned MID Server installations reused stale config from the previous environment
- Validation confirmed the new instance worked interactively but did not verify where live MID polling and ECC activity actually went
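Most of these causes come down to a stale reference to the old tenant somewhere in local MID Server files. A minimal sketch of scanning for such references, assuming the files have already been read into memory (the file names and the `old-tenant` hostname are placeholders):

```python
def find_stale_references(files: dict[str, str], old_host: str) -> list[str]:
    """Return the names of files whose contents still mention the retired instance host."""
    return sorted(name for name, text in files.items() if old_host in text)

# Hypothetical snapshot of local MID Server files
files = {
    "config.xml": '<parameter name="url" value="https://new-tenant.service-now.com"/>',
    "wrapper-override.conf": "wrapper.ntservice.displayname=MID old-tenant.service-now.com",
    "proxy.conf": "https_proxy=http://proxy.corp.example:8080",
}
print(find_stale_references(files, "old-tenant.service-now.com"))
```

Running the same scan on every MID Server in every network zone catches the cloned-install case, where one worker pool was fixed and another was not.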
Step-by-Step Fix
- Capture one affected MID Server and record the exact instance URL, proxy path, and trust chain it actually uses, because the runtime registration path determines where discovery and orchestration data really lands.
- Compare that active MID Server path with the intended post-migration ServiceNow design, because one stale local config file can keep many probes tied to the retired instance.
- Review config.xml, wrapper settings, proxy config, certificates, credentials, and installed MID service settings for references to the old tenant, because MID behavior depends on local agent config and platform trust together.
- Check each MID Server, network zone, and installation group separately if behavior differs, because migrations often fix one worker pool while another still points to the previous instance.
- Update the authoritative MID Server configuration so affected servers register and poll against the intended ServiceNow instance, because bringing the new tenant online alone does not retarget installed agents.
- Trigger a controlled discovery or orchestration action and confirm the intended instance receives the ECC activity from the affected MID Server, because a running MID Server Windows service does not prove the right tenant is receiving its output.
- Verify the old instance no longer receives polls, heartbeats, or probe traffic from migrated MID Servers, because split orchestration paths can remain hidden while both instances stay reachable.
- Review certificates, proxy authentication, and local service settings if registration still fails, because the destination can be correct while trust or outbound access still blocks the new path.
- Document which team owns MID packaging, proxy policy, and migration validation so future ServiceNow cutovers verify the actual instance each MID Server reports to before retiring the previous tenant.
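The split-path check in the steps above can be sketched as a comparison of where each MID Server actually registers against the intended post-migration target (server names and URLs below are hypothetical):

```python
def find_split_paths(registrations: dict[str, str], intended_url: str) -> dict[str, str]:
    """Return the MID Servers, and the URL they poll, that are not on the intended instance."""
    return {mid: url for mid, url in registrations.items() if url != intended_url}

# Hypothetical runtime registrations gathered from each MID Server's local config
registrations = {
    "mid-dc1-01": "https://new-tenant.service-now.com",
    "mid-dc2-01": "https://old-tenant.service-now.com",  # still on the retired tenant
}
stale = find_split_paths(registrations, "https://new-tenant.service-now.com")
print(stale)  # {'mid-dc2-01': 'https://old-tenant.service-now.com'}
```

An empty result for every migrated MID Server, combined with no remaining polls or heartbeats on the old instance, is the condition to require before retiring the previous tenant.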