Introduction
A storage migration can present the new SAN successfully while hosts still log in to the old iSCSI target. Volumes appear mounted, but data comes from the previous array, rescans keep rediscovering retired portals, or failover behaves unpredictably because initiator discovery and multipath settings still point at the original storage environment.
Treat this as a storage-path problem instead of a generic filesystem or application issue. Start by checking which iSCSI target portal and IQN the affected host actually uses, because migrations often present new LUNs and update application cutover steps while the initiator remains bound to the old SAN.
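On a Linux host with the open-iscsi initiator, the connected portal and IQN come from `iscsiadm -m session`. The sketch below extracts both per session with awk; the sample output, portals, and IQNs are placeholder examples, not values from any real environment.

```shell
# Extract the target portal and IQN from iscsiadm session lines.
# On a live host, feed this from:  iscsiadm -m session | parse_sessions
parse_sessions() {
  # Lines look like: tcp: [1] 192.0.2.10:3260,1 iqn.2001-05.com.oldarray:tgt0 (non-flash)
  # Field 3 is "portal,tpgt"; field 4 is the target IQN.
  awk '{ split($3, p, ","); print "portal=" p[1], "iqn=" $4 }'
}

# Sample output from a host still logged in to both arrays (example values):
sample='tcp: [1] 192.0.2.10:3260,1 iqn.2001-05.com.oldarray:tgt0 (non-flash)
tcp: [2] 198.51.100.20:3260,1 iqn.2020-01.com.newarray:tgt0 (non-flash)'

printf '%s\n' "$sample" | parse_sessions
```

For the mapping from sessions to block devices, `iscsiadm -m session -P 3` lists the attached SCSI disks per session, and `multipath -ll` shows which paths back each multipath device.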
Symptoms
- Hosts still connect to the old iSCSI target or SAN after migration
- Storage appears online, but data does not reflect the intended new backend
- Multipath sessions include old and new portals at the same time
- One host or cluster node uses the new storage while another still logs in to the retired array
- After the old SAN is disabled, mounts fail or devices disappear unexpectedly
- The issue started after moving block storage, target portals, or virtualization infrastructure
Common Causes
- The initiator still has the old target portal, discovery address, or static node record configured
- Multipath configuration still prefers the previous SAN path group or portal priority
- Old target IQNs or CHAP settings remain stored and reconnect automatically after reboot
- One cluster node or template was updated while another still uses the retired storage target
- DNS aliases, storage VLAN changes, or target portal IPs still resolve to the previous array
- Validation confirmed new LUN presentation but did not verify which target each host actually logged in to
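Several of these causes come down to stored initiator records that survive reboots. A quick way to flag them is to filter the node database for the retired portal; on a live host the records come from `iscsiadm -m node`, while the sketch below uses a sample listing with placeholder portals and IQNs.

```shell
# Flag stored open-iscsi node records that still point at the retired SAN.
OLD_PORTAL="192.0.2.10"   # example address of the retired array

# On a live host:  nodes=$(iscsiadm -m node)
nodes='192.0.2.10:3260,1 iqn.2001-05.com.oldarray:tgt0
198.51.100.20:3260,1 iqn.2020-01.com.newarray:tgt0'

# Node records begin with "portal:port,tpgt", so anchor on the portal.
stale=$(printf '%s\n' "$nodes" | grep "^$OLD_PORTAL:" || true)
if [ -n "$stale" ]; then
  echo "stale node records found:"
  printf '%s\n' "$stale"
fi
```

Discovery records are checked the same way with `iscsiadm -m discoverydb -t sendtargets -P 1`; any entry for the old portal will be re-used on the next discovery or reboot until it is deleted.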
Step-by-Step Fix
- Capture the active iSCSI sessions on an affected host and record the connected target portal, IQN, and backing devices, because the live storage path determines which array the system is really using.
- Compare that active target with the intended post-migration SAN design, because one stale discovery record can reconnect the host to the retired array even after new storage was presented.
- Review initiator discovery databases, static node definitions, CHAP settings, multipath policy, and cluster templates for references to the old SAN, because iSCSI state often persists across reboots and cutovers.
- Check every node in the cluster or host group separately if behavior is inconsistent, because one server with stale discovery records can keep shared workloads attached to the old storage path.
- Update the authoritative iSCSI portal and multipath configuration, then rescan or reconnect carefully so the host adopts the intended target without leaving mixed storage sessions behind.
- Confirm the affected host logs in only to the intended target portals and that the expected volumes map to the new backend, because seeing block devices online does not prove they come from the right SAN.
- Verify the old SAN no longer receives initiator logins from migrated hosts, because a migration can look complete while stale sessions persist, and the problem often surfaces only when the retired array is decommissioned.
- Review filesystem mounts, clustering software, and application device mappings if storage still looks wrong, because hosts can switch targets while higher layers remain pinned to stale device identifiers.
- Document who owns initiator templates, multipath standards, and migration validation so future storage cutovers verify the real connected target before decommissioning the previous array.
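The logout/cleanup/reconnect portion of the steps above can be sketched as the following open-iscsi sequence. The portals and IQNs are hypothetical placeholders, and the `run` wrapper echoes each command instead of executing it so the sequence can be reviewed before touching a live host; set `DRY_RUN=0` only after validating the values against your migration plan.

```shell
# Dry-run sketch of the iSCSI cutover: remove old target state, adopt the new one.
OLD_PORTAL="192.0.2.10:3260";   OLD_IQN="iqn.2001-05.com.oldarray:tgt0"   # examples
NEW_PORTAL="198.51.100.20:3260"; NEW_IQN="iqn.2020-01.com.newarray:tgt0"  # examples

run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi; }

cutover() {
  # 1. Log out of the retired target, then delete its node and discovery records
  #    so nothing reconnects after the next reboot or rescan.
  run iscsiadm -m node -T "$OLD_IQN" -p "$OLD_PORTAL" --logout
  run iscsiadm -m node -o delete -T "$OLD_IQN" -p "$OLD_PORTAL"
  run iscsiadm -m discoverydb -t sendtargets -p "$OLD_PORTAL" -o delete

  # 2. Discover and log in to the intended target, then reload multipath maps.
  run iscsiadm -m discovery -t sendtargets -p "$NEW_PORTAL"
  run iscsiadm -m node -T "$NEW_IQN" -p "$NEW_PORTAL" --login
  run multipath -r
}

cutover
```

Doing the logout and record deletion before the new discovery avoids the mixed-session window where multipath holds paths to both arrays at once.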
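The post-cutover verification step can also be automated as a simple check: assert that no active session references the retired portal. On a live host the session list comes from `iscsiadm -m session`; the sample below shows a clean post-cutover state with example values.

```shell
# Verify the host now logs in only to the intended portals.
OLD_PORTAL="192.0.2.10"   # example address of the retired array

# On a live host:  sessions=$(iscsiadm -m session)
sessions='tcp: [3] 198.51.100.20:3260,1 iqn.2020-01.com.newarray:tgt0 (non-flash)'

if printf '%s\n' "$sessions" | grep -q "$OLD_PORTAL"; then
  echo "FAIL: host still has a session to the retired array"
else
  echo "OK: no sessions to $OLD_PORTAL"
fi
```

Running this check across every node in the host group, not just one representative, catches the inconsistent-cluster case where a single server keeps shared workloads on the old path.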