Introduction
A logging migration can move ingestion to a new Graylog node or cluster while an input on the old node still receives traffic. The new platform appears healthy, yet live log streams keep landing on the retired path, one sender reaches the new input while another still targets the previous node, or failures begin only after the old listener is shut down. This happens because input bindings, load balancer targets, and sender configuration often change separately.
Treat this as an ingestion-target problem instead of a generic Graylog outage. Start by checking which input and node an affected log sender actually connects to, because migrations often validate the new input from the Graylog UI while real agents continue following older listener addresses.
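One quick way to check the actual ingestion target is to resolve the sender's configured endpoint and open a connection the way the agent would, then record which peer address answered. The sketch below is a minimal example; `graylog.example.com` and port `12201` are placeholders, so substitute the host and port from the affected sender's output configuration.

```python
import socket

def resolve_targets(host, port):
    """Return every address the sender's configured hostname resolves to."""
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

def reached_node(host, port, timeout=5.0):
    """Connect the way the sender would and return the peer address that
    actually answered, after DNS resolution and any load balancer routing."""
    with socket.create_connection((host, port), timeout=timeout) as conn:
        return conn.getpeername()[0]

# Example (placeholder endpoint - use the affected sender's real target):
# print(resolve_targets("graylog.example.com", 12201))
# print(reached_node("graylog.example.com", 12201))
```

If `reached_node` returns an address belonging to the retired node, the sender's path is stale regardless of how healthy the new input looks in the UI.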
Symptoms
- A Graylog input still receives logs on the old node after migration
- The new input is healthy, but production logs still land on the retired listener
- One sender or source group uses the new node while another still uses the previous one
- Logging failures begin only after the old node, port binding, or load balancer target is removed
- The new cluster is healthy, but migrated senders do not use it consistently
- The issue started after moving Graylog nodes, inputs, load balancers, or sender management paths
Common Causes
- The input still binds only on the old node or old address path
- Load balancer or DNS targets still include the retired Graylog node
- Sidecar-managed collector config or sender output settings still point to the previous listener
- A node-specific input was recreated on the new cluster, but senders still target the earlier endpoint
- Automation or bootstrap config restored older sender or load balancer settings
- Validation confirmed the new input accepted test data but did not verify which node live log senders actually reached
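Several of these causes can be audited mechanically by comparing what a sender-facing endpoint resolves to against the intended post-migration node set. A minimal sketch, assuming hypothetical `NEW_NODES` and `RETIRED_NODES` address sets (substitute your actual cluster and retired-node IPs):

```python
import socket

# Hypothetical addresses - replace with your real new-cluster and retired-node IPs.
NEW_NODES = {"10.0.2.10", "10.0.2.11"}
RETIRED_NODES = {"10.0.1.10"}

def audit_endpoint(host, port):
    """Resolve the sender-facing endpoint and flag stale or missing targets."""
    resolved = {info[4][0]
                for info in socket.getaddrinfo(host, port,
                                               proto=socket.IPPROTO_TCP)}
    return {
        "resolved": sorted(resolved),            # what senders actually get
        "stale": sorted(resolved & RETIRED_NODES),   # retired nodes still served
        "missing_new": sorted(NEW_NODES - resolved), # new nodes not yet in DNS/LB
    }
```

A non-empty `stale` list means DNS or the load balancer still hands out the retired node; a non-empty `missing_new` list means the new cluster is not yet reachable through the endpoint senders use.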
Step-by-Step Fix
- Capture one affected sender and record the output target, input type, listening address, and Graylog node it actually reaches, because the live ingestion path determines where logs really land.
- Compare that active sender path with the intended post-migration logging design, because one stale listener target or load balancer member can keep production logs tied to the retired node.
- Review Graylog input bindings, load balancer config, DNS records, sidecar collector config, and sender output templates for references to the old node, because Graylog ingestion depends on both platform listeners and sender-side targeting.
- Check each input type, sender class, and network path separately if behavior differs, because migrations often update one log stream while another still reaches the previous listener.
- Update the authoritative input endpoint and sender configuration so affected logs land on the intended node or cluster, because creating the new input alone does not retarget existing senders.
- Run a controlled log event and confirm the intended Graylog input and node receive it, because a healthy dashboard does not prove the right listener handled production traffic.
- Verify the old node no longer receives log traffic from migrated senders, because split ingestion paths can remain hidden while both listeners stay available.
- Review TLS trust, firewall rules, and collector backpressure if logs still fail, because the destination can be correct while transport policy or sender health still blocks the new path.
- Document which team owns Graylog inputs, sender config, and migration validation so future logging cutovers verify the actual runtime ingestion target before retiring the previous node.
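The controlled-log-event step above can be sketched as a small GELF TCP probe. GELF frames over TCP are null-byte delimited JSON, and custom fields are prefixed with an underscore, which makes the test event easy to find in Graylog search. The host and port here are placeholders for your intended new input; the `_migration_check` field name is an arbitrary marker, not a Graylog convention.

```python
import json
import socket
import time

def send_gelf_tcp(host, port, short_message, timeout=5.0):
    """Send one GELF 1.1 message over TCP (null-byte framed) and return the
    peer IP that accepted it, so you can confirm which node handled it."""
    event = {
        "version": "1.1",
        "host": socket.gethostname(),
        "short_message": short_message,
        "timestamp": time.time(),
        "_migration_check": "cutover-validation",  # marker field for searching
    }
    payload = json.dumps(event).encode() + b"\0"
    with socket.create_connection((host, port), timeout=timeout) as conn:
        peer = conn.getpeername()[0]
        conn.sendall(payload)
    return peer

# Example (placeholder endpoint - use the intended new GELF TCP input):
# print(send_gelf_tcp("graylog-new.example.com", 12201, "migration test event"))
```

After sending, search Graylog for the marker field and confirm the event arrived on the intended input and node; then repeat the check while watching the old node to verify it receives nothing from migrated senders.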