Introduction

A Splunk migration can bring a new indexer cluster online while Universal Forwarders still send data to the old one. Search and ingestion may look healthy on the destination side, yet production hosts keep forwarding to the retired cluster: one deployment app points at the new targets while another still carries the earlier tcpout settings, or failures surface only after the old indexers are decommissioned. This happens because outputs.conf, deployment app delivery, and indexer discovery often drift independently.

Treat this as a forward-target problem rather than a generic Splunk outage. Start by checking which indexer or discovery service an affected forwarder actually connects to, because migrations are often validated from the management tier while live hosts keep following older forwarding configuration.
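On a suspect host, the forwarder itself can report which targets it is actually using. A minimal check, assuming a default Universal Forwarder install under /opt/splunkforwarder (adjust the path for your environment):

```shell
# Show the effective tcpout configuration after all layering (system, app, local)
/opt/splunkforwarder/bin/splunk btool outputs list tcpout --debug

# Ask splunkd which forward-servers are active versus configured but inactive
/opt/splunkforwarder/bin/splunk list forward-server

# Confirm live outbound connections from splunkd's own log
grep "Connected to idx" /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -5
```

The btool `--debug` flag also prints which file each setting comes from, which identifies whether a stale target is local or delivered by a deployment app.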

Symptoms

  • Splunk Universal Forwarder still sends data to the old indexer cluster after migration
  • New indexers are healthy, but production data keeps arriving at the retired cluster
  • One host group uses the new cluster while another still forwards to the previous targets
  • Ingestion failures begin only after the old indexers, load balancer, or certificates are removed
  • The new cluster exists, but migrated forwarders never send to it consistently
  • The issue started after moving Splunk indexers, deployment apps, forwarding tiers, or discovery services

Common Causes

  • outputs.conf still lists the old indexer or load-balanced forward-server targets
  • A deployment server app or local app override keeps restoring the earlier forwarding destination
  • Indexer discovery still points to the old cluster manager or discovery endpoint
  • One host image or app context was updated while another still uses the previous tcpout group
  • Automation, package defaults, or bootstrap scripts restored older Splunk forwarding settings
  • Validation confirmed the new cluster accepted test data but did not verify which target live forwarders actually used
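Several of these causes reduce to the same artifact: a tcpout stanza that still names the old targets. A hedged illustration of what a stale versus corrected outputs.conf might look like (all hostnames are placeholders):

```ini
# Stale outputs.conf delivered by an old deployment app --
# still pointing at the retired cluster
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = old-idx1.example.com:9997, old-idx2.example.com:9997

# Intended post-migration version, using indexer discovery against the
# new cluster manager instead of a hard-coded server list
[tcpout:primary_indexers]
indexerDiscovery = new_cluster

[indexer_discovery:new_cluster]
# newer Splunk versions also accept manager_uri here
master_uri = https://new-cm.example.com:8089
pass4SymmKey = <discovery-secret>
```

With indexer discovery, peers added to the new cluster are picked up automatically, so forwarders no longer need a config change when the indexer tier grows.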

Step-by-Step Fix

  1. Capture one affected forwarder and record the active tcpout target, deployment app source, indexer discovery settings, and local override path it actually uses, because the live forwarding path determines where data really lands.
  2. Compare that active forwarding path with the intended post-migration logging design, because one stale forward target can keep large host groups tied to the retired indexer cluster.
  3. Review outputs.conf, deployment server apps, local app overrides, indexer discovery configuration, and automation templates for references to the old cluster, because Splunk forwarding behavior depends on both local config and centrally delivered apps.
  4. Check each host class, deployment app, and forwarding group separately if behavior differs, because migrations often update one forwarder path while another still uses the previous destination.
  5. Update the authoritative forwarding target and config-delivery source so affected hosts send to the intended cluster, because standing up the new indexers alone does not retarget existing forwarders.
  6. Restart or reload the affected forwarder as needed and confirm the intended cluster now receives a controlled test event, because a running forwarder service does not prove the right indexers own the traffic.
  7. Verify the old cluster no longer receives forwarded data from migrated hosts, because split ingestion paths can remain hidden while both forwarding targets stay reachable.
  8. Review certificates, indexer acknowledgment, network ACLs, and queue backlog if forwarding still fails, because the destination can be correct while transport trust or delivery health still blocks the new path.
  9. Document which team owns forwarding config, deployment app delivery, and migration validation so future Splunk cutovers verify the actual runtime target before retiring the previous cluster.
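For step 6, a controlled test event can be generated through a file the forwarder already monitors. A sketch, assuming default install paths and that an inputs.conf stanza actually watches the log file used here (substitute a path your inputs monitor):

```shell
# Push the updated app from the deployment server
/opt/splunk/bin/splunk reload deploy-server

# Restart the affected forwarder so it picks up the new target
/opt/splunkforwarder/bin/splunk restart

# Emit a uniquely identifiable test event into a monitored file
echo "migration-check host=$(hostname) ts=$(date +%s)" >> /var/log/app/migration-check.log
```

Searching for the "migration-check" string on the new cluster's search head then confirms end-to-end delivery rather than just a running forwarder process.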
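For step 7, the old cluster's own _internal data shows which forwarders are still connecting inbound. A sketch run on the old cluster's search head, assuming the field names follow metrics.log tcpin_connections records:

```
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) AS last_seen BY hostname
| convert ctime(last_seen)
```

Migrated hosts whose last_seen keeps advancing are still forwarding to the retired cluster and need their config path traced again before decommissioning.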
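Step 3 can be partially automated with a short script that sweeps configuration trees for the retired hostnames. A sketch, assuming Python 3.9+ and that the old targets are known; the hostnames and search roots below are placeholders:

```python
#!/usr/bin/env python3
"""Scan Splunk .conf files for references to retired indexer targets."""
import re
from pathlib import Path

# Hostnames of the retired cluster (illustrative placeholders).
OLD_TARGETS = ["old-idx1.example.com", "old-idx2.example.com"]


def find_stale_references(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line_number, line) for each non-comment line under
    root that still names an old indexer target."""
    pattern = re.compile("|".join(re.escape(t) for t in OLD_TARGETS))
    hits = []
    for conf in Path(root).rglob("*.conf"):
        for lineno, line in enumerate(conf.read_text(errors="ignore").splitlines(), 1):
            if pattern.search(line) and not line.lstrip().startswith("#"):
                hits.append((str(conf), lineno, line.strip()))
    return hits


if __name__ == "__main__":
    # Typical roots: the deployment server's app store and a forwarder's etc tree.
    for root in ["/opt/splunk/etc/deployment-apps", "/opt/splunkforwarder/etc"]:
        if Path(root).is_dir():
            for path, lineno, line in find_stale_references(root):
                print(f"{path}:{lineno}: {line}")
```

Running this on the deployment server before and after the cutover gives a quick diff of which apps still carry the old destinations.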