Introduction
A Splunk migration can bring the new indexer tier online while Universal Forwarders still stream data to the old one. Hosts stay connected, yet the symptoms vary: events keep landing in the retired cluster, one server uses the new indexer group while another still points at the previous destination, or ingestion fails only after the old indexer is decommissioned. This happens because outputs.conf, deployment apps, load-balancing groups, and certificate trust often drift separately during a migration.
Treat this as a telemetry-routing problem instead of a generic Splunk outage. Start by checking which indexer group, target host, and certificate chain an affected forwarder actually uses in a live connection, because migrations often validate the new Splunk tier centrally while local forwarders continue using older output definitions.
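On an affected host, the runtime output path can be inspected directly with the Splunk CLI. A minimal sketch, assuming a default Linux Universal Forwarder install path and the default receiving port; the indexer hostname is an illustrative placeholder:

```shell
# Show the effective, merged outputs configuration, including which file
# each setting comes from (local config vs. a deployment app).
/opt/splunkforwarder/bin/splunk btool outputs list tcpout --debug

# List the forward-servers the forwarder considers active vs. configured
# (prompts for Splunk credentials).
/opt/splunkforwarder/bin/splunk list forward-server

# Confirm which indexer the forwarder is actually connected to right now.
# 9997 is the default Splunk receiving port; adjust if yours differs.
netstat -tnp 2>/dev/null | grep 9997

# Inspect the certificate chain presented by the intended destination.
openssl s_client -connect new-indexer.example.com:9997 -showcerts </dev/null
```

The btool output is the key artifact: it reveals whether a stale deployment app or a local `outputs.conf` is supplying the old destination.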
Symptoms
- A Splunk Universal Forwarder still sends data to the old indexer after migration
- Events continue landing in the retired indexer cluster or heavy forwarder path
- One forwarder or server class uses the new indexers while another still uses the previous destination
- Ingestion fails only after the old indexer tier is shut down
- The new Splunk environment is healthy, but migrated hosts never send logs there
- The issue started after moving indexers, deployment server apps, or logging infrastructure
Common Causes
- outputs.conf still lists the old indexer group, host, or heavy forwarder
- Deployment server apps updated one server class but not another
- Load-balancing groups or failover order still prefer the previous indexer set
- Certificates or trust bundles still validate the retired indexer tier
- Cloned host images or config management keep restoring the old output path
- Validation confirmed the new indexer accepted test events but did not verify where live forwarders actually sent production data
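The most common of these causes looks like the fragments below. The stanzas are a hedged illustration, not values from any real environment; group names, hostnames, and the CA path are placeholders:

```ini
# Stale outputs.conf: the default group still targets the retired tier.
[tcpout]
defaultGroup = old_indexers

[tcpout:old_indexers]
server = old-idx1.example.com:9997, old-idx2.example.com:9997

# Intended post-migration outputs.conf: default group retargeted at the
# new tier, with the new tier's CA so TLS trust follows the migration.
# [tcpout]
# defaultGroup = new_indexers
#
# [tcpout:new_indexers]
# server = new-idx1.example.com:9997, new-idx2.example.com:9997
# sslRootCAPath = /opt/splunkforwarder/etc/auth/new-ca.pem
# useACK = true
```

If this file ships from a deployment app, edit it in the app on the deployment server rather than on individual hosts, or the next polling cycle will restore the old path.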
Step-by-Step Fix
- Capture one affected forwarder and record the active output group, destination host list, and certificate trust it actually uses, because the runtime output path determines where telemetry really lands.
- Compare that active forwarding path with the intended post-migration Splunk design, because one stale deployment app can keep many servers tied to the retired indexer tier.
- Review outputs.conf, deployment apps, server classes, certificate bundles, and load-balancing settings for references to the old indexer environment, because Splunk forwarding depends on local config and centrally deployed policy together.
- Check different server classes, forwarder groups, and regions separately if behavior differs, because migrations often fix one deployment path while another still uses the previous indexer set.
- Update the authoritative forwarder configuration so affected hosts stream to the intended indexer group, because building the new Splunk cluster alone does not retarget installed forwarders.
- Generate a controlled test event and confirm the intended indexer receives it from the affected host, because a running forwarder service does not prove the right destination is ingesting data.
- Verify the old indexer tier no longer receives telemetry from migrated forwarders, because split logging paths can remain hidden while both clusters stay reachable.
- Review certificates, deployment polling, and ACK or load-balancing behavior if forwarding still fails, because the destination can be correct while trust or client policy still blocks the new route.
- Document which team owns deployment apps, indexer-group policy, and migration validation so future Splunk cutovers verify the actual runtime forwarding target before retiring the previous cluster.
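Once the authoritative configuration is corrected, the update-and-verify steps above can be sketched as one pass. Hostnames, install paths, and the test-event text are assumptions for illustration; the `logger` step only works if the forwarder monitors the host's syslog destination:

```shell
# On the deployment server: push the corrected app to its server classes.
/opt/splunk/bin/splunk reload deploy-server

# On the affected forwarder: restart so the new outputs take effect,
# then emit an easily searchable test event via syslog.
/opt/splunkforwarder/bin/splunk restart
logger "splunk-migration-test $(hostname) $(date +%s)"

# On the NEW indexer tier: confirm the test event arrived from this host.
/opt/splunk/bin/splunk search 'index=* "splunk-migration-test" earliest=-15m'

# On the OLD indexer tier: run the same search; migrated hosts should
# return nothing before the old tier is retired.
```

Running the same search against both tiers is what catches split logging paths: a healthy result on the new tier alone does not prove the old tier stopped receiving data.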