Introduction

A monitoring migration can bring the new Prometheus, Loki, Elasticsearch, or metrics backend online while Grafana dashboards still query the old stack. Panels continue loading but show stale data from the retired environment; one folder uses the new backend while another still points to the previous data source; or dashboards fail only after the old monitoring tier is removed. These failure modes coexist because data source URLs, provisioning files, and dashboard references often drift independently during a migration.

Treat this as an observability query-path problem rather than a generic Grafana outage. Start by checking which data source endpoint an affected panel actually uses at runtime: migrations often validate the new monitoring stack in Grafana's admin settings while live dashboards continue querying the old sources.
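One way to see which data source a panel really binds to is to export the dashboard's JSON model (via the dashboard settings or the Grafana HTTP API) and inspect each panel's datasource field. The sketch below parses an inline sample payload; the panel titles and UID values are hypothetical. Note that newer Grafana schemas store the data source as an object with a uid, while older dashboards may store only a name string, so an audit should handle both:

```python
import json

# Sample of a dashboard JSON model as Grafana exports it. In newer schemas a
# panel's "datasource" is an object with "type" and "uid"; older dashboards
# may store just a name string. All titles and UIDs here are made up.
dashboard_json = """
{
  "title": "Service Overview",
  "panels": [
    {"title": "CPU",    "datasource": {"type": "prometheus", "uid": "old-prom"}},
    {"title": "Logs",   "datasource": {"type": "loki", "uid": "new-loki"}},
    {"title": "Errors", "datasource": "Old Elasticsearch"}
  ]
}
"""

def panel_datasources(model: dict) -> list[tuple[str, str]]:
    """Return (panel title, data source uid-or-name) pairs for a dashboard model."""
    out = []
    for panel in model.get("panels", []):
        ds = panel.get("datasource")
        if isinstance(ds, dict):      # modern schema: {"type": ..., "uid": ...}
            ref = ds.get("uid", "(default)")
        elif isinstance(ds, str):     # legacy schema: data source name
            ref = ds
        else:                         # null means the org's default data source
            ref = "(default)"
        out.append((panel.get("title", "?"), ref))
    return out

for title, ref in panel_datasources(json.loads(dashboard_json)):
    print(f"{title}: {ref}")
```

Running this against each affected dashboard shows at a glance which panels still reference a retired UID or name, which is exactly the runtime binding the migration needs to verify.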

Symptoms

  • A Grafana data source still points to the old monitoring stack after migration
  • Dashboards load, but they show stale or unexpected data from the retired backend
  • One dashboard or folder uses the new stack while another still queries the previous one
  • Panels fail only after the old monitoring system is restricted or removed
  • The new backend is healthy, but migrated dashboards never use it
  • The issue started after moving Grafana, Prometheus, Loki, Elasticsearch, or another monitoring tier

Common Causes

  • The Grafana data source URL still references the old monitoring endpoint
  • Provisioning files or config management keep restoring the previous data source definition
  • Folder-level permissions or org scoping updated one area but not another
  • Dashboard variables, panel queries, or mixed data source settings still prefer the old backend
  • A cloned data source exists with a similar name, and dashboards still bind to the retired one
  • Validation confirmed the new stack ingested data but did not verify which source live dashboards actually queried
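Several of these causes leave a textual trace: the old endpoint's hostname survives in a provisioning file or a config-management template. A simple scan for retired hostnames can surface them before they get re-applied. This is a minimal sketch; the file names, YAML contents, and hostnames are examples only, not real endpoints:

```python
import re

# Hypothetical provisioning snippets keyed by file path; hostnames are examples.
provisioning_files = {
    "datasources/prometheus.yaml": """
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    url: http://old-prom.internal:9090
""",
    "datasources/loki.yaml": """
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    url: http://loki.new-stack.internal:3100
""",
}

# Hostnames belonging to the retired monitoring stack (assumed for this example).
RETIRED_HOSTS = re.compile(r"old-prom\.internal|old-es\.internal")

def stale_references(files: dict[str, str]) -> list[str]:
    """Return the provisioning files that still mention a retired backend host."""
    return [path for path, body in files.items() if RETIRED_HOSTS.search(body)]

print(stale_references(provisioning_files))
# ['datasources/prometheus.yaml']
```

The same scan applies to dashboard JSON exports and config-management templates; anything it flags is a candidate for the "keeps restoring the previous definition" failure mode above.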

Step-by-Step Fix

  1. Capture one affected dashboard panel and record the exact data source, backend URL, and org or folder context it actually uses, because the live query path determines where observability data really comes from.
  2. Compare that active query path with the intended post-migration monitoring design, because one stale data source or dashboard binding can keep many teams viewing the retired backend.
  3. Review Grafana data source settings, provisioning files, dashboard JSON, variables, and any config-management templates for references to the old monitoring stack, because Grafana query routing depends on both admin-defined sources and dashboard-level bindings.
  4. Check each organization, folder, and dashboard set separately if behavior differs, because migrations often fix one visualization path while another still uses the previous backend.
  5. Update the authoritative data source and dashboard references so affected panels query the intended monitoring stack, because building the new backend alone does not retarget existing Grafana content.
  6. Run a controlled query and confirm the intended backend serves fresh data to the affected dashboard, because a panel rendering successfully does not prove the right monitoring system answered it.
  7. Verify the old backend no longer receives dashboard queries from migrated users, because split observability paths can remain hidden while both stacks stay reachable.
  8. Review credentials, proxy settings, and org permissions if queries still fail, because the destination can be correct while access or routing still blocks the new path.
  9. Document which team owns Grafana provisioning, dashboard standards, and migration validation so future monitoring cutovers verify the actual runtime data source before retiring the previous stack.
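The audit in steps 1 through 7 reduces to one check: given the data source UIDs each dashboard's panels resolve to (gathered as in step 1), report which dashboards still bind to the retired stack. The dashboard names and UIDs below are illustrative, not taken from any real environment:

```python
# Map of dashboard -> data source UIDs its panels resolve to, as gathered from
# the dashboard JSON models in step 1. All names and UIDs are illustrative.
dashboard_bindings = {
    "payments-overview": {"new-prom", "new-loki"},
    "checkout-errors":   {"old-prom"},            # still on the retired backend
    "infra-capacity":    {"new-prom", "old-es"},  # mixed: partially migrated
}

# UIDs belonging to the retired monitoring stack (assumed for this example).
RETIRED_UIDS = {"old-prom", "old-es"}

def migration_report(bindings: dict[str, set[str]]) -> dict[str, list[str]]:
    """Return retired-UID hits per dashboard; an empty dict means the cutover is clean."""
    report = {}
    for dashboard, uids in bindings.items():
        stale = sorted(uids & RETIRED_UIDS)
        if stale:
            report[dashboard] = stale
    return report

print(migration_report(dashboard_bindings))
# {'checkout-errors': ['old-prom'], 'infra-capacity': ['old-es']}
```

An empty report is the exit criterion for step 7: only then is it safe to restrict or retire the old backend, because no migrated dashboard can still route queries to it.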