Introduction
An observability migration can leave the new tracing platform online but empty because applications keep exporting traces to the old collector. Dashboards in the new system stay blank, service maps look incomplete, and teams suspect instrumentation gaps when the real problem is that the OpenTelemetry exporter or a local collector still forwards spans to the retired endpoint.
Treat this as a telemetry-routing problem instead of a generic monitoring outage. Start by checking which collector endpoint the running service actually exports to, because tracing migrations often update dashboards and credentials first while the live exporter configuration stays pinned to the previous backend.
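Before reading any config files, it helps to reproduce how the SDK resolves the exporter destination. The following is a minimal sketch of that resolution order, assuming the standard OpenTelemetry precedence rules: the signal-specific OTEL_EXPORTER_OTLP_TRACES_ENDPOINT overrides the generic OTEL_EXPORTER_OTLP_ENDPOINT, and the OTLP/gRPC default of http://localhost:4317 is the fallback when neither is set.

```python
import os

def effective_otlp_traces_endpoint(env=None):
    """Resolve the OTLP traces endpoint the way the SDK does:
    signal-specific variable first, then the generic one, then the
    OTLP/gRPC exporter default."""
    env = os.environ if env is None else env
    return (env.get("OTEL_EXPORTER_OTLP_TRACES_ENDPOINT")
            or env.get("OTEL_EXPORTER_OTLP_ENDPOINT")
            or "http://localhost:4317")

# Example: a stale generic variable silently wins over your intended config.
stale = {"OTEL_EXPORTER_OTLP_ENDPOINT": "http://old-collector:4317"}
print(effective_otlp_traces_endpoint(stale))  # → http://old-collector:4317
```

Running this against the actual environment of the affected process (not your local shell) shows the destination the exporter will really use.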
Symptoms
- Traces are missing from the new observability platform after migration
- The old collector or tracing backend still receives fresh span data
- One service appears in the new platform while another still exports to the previous collector
- Local instrumentation looks healthy, but no end-to-end traces arrive in the target platform
- Exporter errors start after moving collectors, regions, or observability vendors
- The issue started after changing OpenTelemetry collectors, SaaS tracing endpoints, or agent routing
Common Causes
- The OTLP exporter endpoint still points to the old collector or backend URL
- Environment variables such as OTEL_EXPORTER_OTLP_ENDPOINT still override the intended collector address at runtime
- A local or sidecar collector still forwards telemetry to the previous destination over OTLP gRPC or HTTP
- One signal path (for example, metrics or logs) was migrated while traces still use the legacy route
- DNS aliases, proxies, or service mesh rules still direct exporter traffic to the old backend
- Validation checked the new dashboards but not the live exporter destination used by the application
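Because routing can live in several of these layers at once, a quick scan of the deployment repo for the retired endpoint often finds the stale reference faster than reading each config by hand. This is a minimal sketch; the host names in OLD_HOSTS and the file extensions are hypothetical placeholders to replace with your own retired collector addresses and config formats.

```python
import pathlib

# Hypothetical retired collector hosts to hunt for across config layers.
OLD_HOSTS = ("old-collector.internal", "tracing.legacy-vendor.example")

def find_stale_endpoints(root="."):
    """Return (file, line number, line) for every config line that still
    mentions a retired collector host."""
    hits = []
    for path in pathlib.Path(root).rglob("*"):
        if path.suffix not in {".yaml", ".yml", ".env", ".tf", ".json"}:
            continue
        for n, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(host in line for host in OLD_HOSTS):
                hits.append((str(path), n, line.strip()))
    return hits
```

Extending OLD_HOSTS with DNS aliases and proxy names catches the indirect routes that a search for the literal backend URL would miss.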
Step-by-Step Fix
- Capture one trace from an affected service and record the exact exporter or collector endpoint used at runtime, because the active telemetry route matters more than the config value you expected to deploy.
- Compare that live destination with the intended post-migration collector or tracing backend, because one stale endpoint can divert every span away from the new platform.
- Review application environment variables, exporter settings, collector pipelines, and deployment templates for any remaining reference to the old trace destination, because telemetry routing often lives in multiple layers.
- Check whether a sidecar, daemonset, or local collector still forwards spans to the retired backend, because updating only the app-level exporter may not change the final trace path.
- Update the authoritative exporter or collector configuration and reload the affected service, because traces will keep following the old route until the running process picks up the new destination.
- Generate a controlled test trace and verify it appears in the intended platform with the expected service name and timestamps, because a healthy process does not prove telemetry reaches the right backend.
- Confirm the old collector stops receiving new spans from the migrated services, because dual export or silent fallback can leave observability split across two systems.
- Review logs and metrics exporters if they use related configuration but separate destinations, because one signal can be migrated while another still uses the legacy observability path.
- Document who owns exporter settings, collector pipelines, and cutover validation for future migrations, because telemetry routing issues are easy to miss during platform changes.
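When comparing the live destination against the intended one, endpoints that differ only in an omitted port or a trailing path are easy to misjudge by eyeballing strings. The sketch below compares host, explicit port, and path components with the standard library; it deliberately does not fill in OTLP default ports (conventionally 4317 for gRPC and 4318 for HTTP), so endpoints that rely on defaults should be written out fully before comparing.

```python
from urllib.parse import urlparse

def same_collector(live, intended):
    """True when two OTLP endpoint URLs point at the same host, port,
    and path. Ports left implicit compare as None, so spell them out."""
    a, b = urlparse(live), urlparse(intended)
    return (a.hostname, a.port, a.path) == (b.hostname, b.port, b.path)

# A migrated-looking endpoint that still targets the old host fails the check.
print(same_collector("http://old-collector:4317", "http://new-collector:4317"))  # → False
```

Wiring a check like this into the cutover validation step turns "the dashboard looks right" into an explicit assertion about where spans are actually sent.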