Introduction

A ConfigMgr migration can bring the new management point online while clients still send inventory, policy requests, or software deployment status to the old one. Devices look healthy in the console but never receive new deployments, one subnet stays attached to the retired hierarchy, or clients fail only after the legacy site system is removed. These patterns persist because site assignment, boundary groups, and client location data often linger past the cutover.

Treat this as a client-assignment problem instead of a generic SCCM health issue. Start by checking which management point, site code, and boundary assignment an affected client actually uses, because migrations often validate the new site server while installed clients continue following previous location data.
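The fastest way to see which management point a client actually uses is to read its own logs, which use the CMTrace format (message wrapped in `<![LOG[...]LOG]!>` followed by time, date, and component attributes). Below is a minimal parsing sketch; the sample line is illustrative, not captured from a real client, and exact LocationServices.log wording varies by ConfigMgr version:

```python
import re

# CMTrace-style line: <![LOG[message]LOG]!><time="..." date="..." component="..." ...>
CMTRACE = re.compile(
    r'<!\[LOG\[(?P<message>.*?)\]LOG\]!>'
    r'<time="(?P<time>[^"]*)"\s+date="(?P<date>[^"]*)"\s+component="(?P<component>[^"]*)"',
    re.DOTALL,
)

def parse_cmtrace(text):
    """Yield (date, time, component, message) tuples from CMTrace-format log text."""
    for m in CMTRACE.finditer(text):
        yield m.group("date"), m.group("time"), m.group("component"), m.group("message")

# Illustrative sample; check the real LocationServices.log on an affected client.
sample = (
    '<![LOG[Current Management Point is OLDMP.contoso.com]LOG]!>'
    '<time="10:01:02.123+000" date="01-15-2025" component="LocationServices" '
    'context="" type="1" thread="1234" file="lsad.cpp:100">\n'
)

for date, time, component, message in parse_cmtrace(sample):
    print(component, "->", message)
```

On the client itself, the assigned site code and current management point are also exposed through the `SMS_Authority` class in the `root\ccm` WMI namespace, which is useful to cross-check against what the logs show.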

Symptoms

  • SCCM or ConfigMgr clients still report to the old management point after migration
  • New deployments or policy changes never reach affected devices
  • One office or boundary group uses the new site while another still uses the old hierarchy
  • Hardware inventory and status messages continue appearing in the retired environment
  • Clients fail only after the old management point is shut down
  • The issue started after moving ConfigMgr roles, site systems, or client management infrastructure

Common Causes

  • Boundary groups still assign affected subnets to the old site system
  • Client site assignment or management point configuration still points to the previous environment
  • Discovery or client push methods are reinstalling older ConfigMgr settings
  • DNS, service location, or HTTPS management point publication still resolves to the retired server
  • One management role moved while software update point or fallback status point settings remained on the old site
  • Validation confirmed the new management point was online but did not verify where existing clients actually checked in
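Stale DNS or management point publication records can be ruled out mechanically by resolving every client-facing name and flagging any that still point at the retired server. A hypothetical sketch follows; the hostnames are illustrative, and the injectable `resolve` parameter exists so the logic can be exercised without live DNS (in practice you would pass `socket.gethostbyname`):

```python
import socket

def find_stale_records(hostnames, retired_ips, resolve=socket.gethostbyname):
    """Return hostnames that still resolve to a retired server's IP address.

    hostnames   -- client-facing names to check (MP aliases, published records)
    retired_ips -- set of IPs belonging to the decommissioned site system
    resolve     -- name-to-IP function, injectable for testing
    """
    stale = {}
    for name in hostnames:
        try:
            ip = resolve(name)
        except OSError:
            continue  # name no longer resolves at all; nothing to flag
        if ip in retired_ips:
            stale[name] = ip
    return stale

# Illustrative example with a fake resolver standing in for real DNS.
fake_dns = {"mp.contoso.com": "10.0.0.5", "sup.contoso.com": "10.0.0.20"}
print(find_stale_records(fake_dns, {"10.0.0.5"}, resolve=fake_dns.__getitem__))
# → {'mp.contoso.com': '10.0.0.5'}
```

Any name that shows up in the result is a candidate for the "still resolves to the retired server" cause above and should be corrected before retiring the old site system.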

Step-by-Step Fix

  1. Capture one affected client and record the assigned site code, current management point, and recent client location entries, because the live assignment path determines where policy and inventory actually go.
  2. Compare that active client path with the intended post-migration ConfigMgr design, because one stale boundary or assignment rule can keep many devices tied to the retired hierarchy.
  3. Review boundary groups, site assignment settings, client installation properties, management point publication, and DNS references for the old server, because ConfigMgr client routing depends on several overlapping configuration sources.
  4. Check each subnet, VPN range, and branch office separately if only part of the fleet is wrong, because migrations often update one boundary group while another still targets the previous site system.
  5. Update the authoritative boundary and client-assignment configuration so affected devices discover and use the intended management point, because building a new role server alone does not retarget enrolled clients.
  6. Trigger a controlled client policy refresh and confirm the device now contacts the intended management point and receives current policy, because a healthy local client service does not prove it switched backends.
  7. Verify the old management point no longer receives inventory or policy requests from migrated devices, because split client assignment can remain hidden while both environments stay reachable.
  8. Review PKI certificates, client cache, and co-management settings if devices still drift back, because the target can be correct while authentication or dual-management state still pushes clients elsewhere.
  9. Document which team owns boundary groups, site assignment, and post-cutover validation so future ConfigMgr migrations confirm the actual management point used by clients before retiring the previous one.
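The verification in steps 6 and 7 scales better when expressed as data rather than spot checks. A minimal sketch under stated assumptions (client names, field layout, and FQDNs are illustrative; in practice the records would come from management point logs or site inventory views) that partitions check-in records by the management point each client actually used, so retirement only proceeds once the stale set is empty:

```python
from collections import defaultdict

def partition_by_mp(checkins, intended_mp):
    """Group client check-ins by management point; return clients not on the intended MP.

    checkins    -- iterable of (client_name, management_point) pairs
    intended_mp -- FQDN of the post-migration management point
    """
    by_mp = defaultdict(set)
    for client, mp in checkins:
        by_mp[mp.lower()].add(client)
    # Anything keyed to a different MP is a stale assignment to investigate.
    return {mp: sorted(clients)
            for mp, clients in by_mp.items() if mp != intended_mp.lower()}

# Illustrative records; FQDN comparison is case-insensitive.
records = [
    ("PC-001", "NEWMP.contoso.com"),
    ("PC-002", "OLDMP.contoso.com"),
    ("PC-003", "newmp.contoso.com"),
]
print(partition_by_mp(records, "NEWMP.contoso.com"))
# → {'oldmp.contoso.com': ['PC-002']}
```

An empty result is the concrete exit criterion for step 7: no migrated device is still checking in against the retired management point.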