Introduction

An application migration can cut the public domain over successfully while the reverse proxy still sends traffic to the old app server. DNS may already point at the new platform, and the proxy itself may run on the correct host, but its upstream definition, backend pool, or an included config file still forwards requests to the previous application node.

Treat this as an upstream-target problem instead of a generic site outage. Start by checking what backend the proxy is actually forwarding to, because migrations often update the visible entry point while the proxy layer quietly keeps the old app server in service behind it.
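As a concrete illustration, here is a minimal sketch of how this looks in an nginx-style config. This is a hypothetical fragment (the upstream name, hostnames, and addresses are invented, and the directives would normally sit inside an http block): the public cutover is complete, but traffic still follows the upstream definition, not DNS.

```nginx
# Hypothetical fragment: DNS for app.example.com already points at the proxy,
# but the upstream pool still holds the retired app server.
upstream app_backend {
    server 10.0.1.10:8080;   # old app server, left behind after migration
    # server 10.0.2.20:8080; # new app server, never enabled
}

server {
    listen 443 ssl;
    server_name app.example.com;

    location / {
        proxy_pass http://app_backend;  # requests follow the upstream, not DNS
    }
}
```

Note that nothing in this fragment fails loudly: the old server keeps answering, so the proxy sees a healthy backend and the misrouting only shows up as stale content.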

Symptoms

  • The public site resolves correctly, but responses still come from the old application server
  • One hostname or route shows old content while the new server works when accessed directly
  • Proxy logs show requests being forwarded to a legacy private IP, hostname, or backend pool member
  • Restarting the app on the new server does not change what users see through the proxy
  • Some paths work correctly while one proxied app block still reaches the previous environment
  • The issue started after app migration, proxy cutover, backend replacement, or load balancer cleanup
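A quick way to confirm the third symptom is to list every directive in the proxy config that decides where traffic goes. The sketch below assumes an nginx-style syntax and inlines a small sample config so it is self-contained; on a real host you would read the live config file instead, and the upstream name and address are hypothetical.

```shell
# Sample config inlined for illustration; replace with the live config file
# on a real host (e.g. the file nginx -T prints).
conf='
upstream app_backend {
    server 10.0.1.10:8080;  # still the old app server
}
server {
    location / {
        proxy_pass http://app_backend;
    }
}
'
# Pull out the directives that decide where traffic actually goes:
# proxy_pass targets and upstream pool members.
targets=$(printf '%s\n' "$conf" | grep -E 'proxy_pass|server [0-9]')
printf '%s\n' "$targets"
```

If the addresses this prints belong to the previous environment, the proxy-to-backend hop is the problem regardless of what DNS says.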

Common Causes

  • The reverse proxy upstream still lists the old app server address
  • An included config file or backend pool definition was never updated after migration
  • Only DNS or the frontend proxy moved while the upstream target stayed on the previous host
  • A fallback upstream or default backend still points to legacy infrastructure
  • The new app server was added, but the old backend was never removed from the proxy routing path
  • Teams validated public routing but not the proxy-to-app hop after cutover
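Because included files are the most common hiding place, it helps to search the whole config tree rather than just the main file. The sketch below builds a throwaway directory so it is self-contained; on a real host you would point the search at the actual config root (for nginx, typically /etc/nginx). All file names and addresses here are hypothetical.

```shell
# Build a hypothetical config tree: a main file that includes a directory,
# and an included file that still defines the retired backend.
tmp=$(mktemp -d)
mkdir -p "$tmp/conf.d"
cat > "$tmp/nginx.conf" <<'EOF'
include conf.d/*.conf;
EOF
cat > "$tmp/conf.d/legacy-app.conf" <<'EOF'
upstream app_backend {
    server 10.0.1.10:8080;  # retired app server
}
EOF
# Recursively list every backend reference, with the file that defines it,
# so leftovers in included files cannot hide behind a clean main config.
hits=$(grep -RIn --include='*.conf' -E 'proxy_pass|server [0-9]' "$tmp")
printf '%s\n' "$hits"
rm -rf "$tmp"
```

The file path in each match matters as much as the address: it tells you which include or pool definition the migration missed.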

Step-by-Step Fix

  1. Confirm that the wrong response appears only through the reverse proxy and compare it with direct access to the intended new app server, because that isolates the proxy-to-backend hop as the actual failure point.
  2. Identify the active reverse proxy configuration for the affected site, route, or app block and record every upstream target it uses, because one included file or backend pool can preserve the old server even when the main config looks correct.
  3. Check whether the proxy forwards by hostname, private IP, container name, or load balancer pool membership, because each method can keep pointing at the previous app server after migration.
  4. Compare the active upstream targets with the intended post-migration backend design, because the new application path may have changed at the host, port, or service-discovery level.
  5. Update the real upstream definition and remove the retired app server from the active backend path only after confirming the new server is healthy, because partial changes can create a second outage.
  6. Reload the reverse proxy and retest the exact hostname or route that exposed the problem, because editing config without activating it will not change live traffic.
  7. Verify in proxy logs and on the new app server that requests now reach the intended backend, because the true fix is correct proxy-to-app delivery rather than one successful page load.
  8. Review related routes, include files, and fallback backends if the proxy serves more than one app, because migration leftovers often survive in one secondary location.
  9. Document the final upstream mapping and retire obsolete backend entries after recovery, because reverse proxy configs are a common place for old app targets to linger during future migrations.
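For step 7, the proxy's own access log is the most direct evidence, provided the log format records the chosen backend (nginx exposes this as the $upstream_addr variable). The sketch below parses a fabricated log line in which the backend was appended as an upstream= field; the address and format are assumptions, so adapt the extraction to whatever your log format actually emits.

```shell
# Fabricated access-log line; assumes a log format that appends the backend
# address as a trailing "upstream=" field (nginx: $upstream_addr).
logline='203.0.113.7 - - [10/May/2025:12:00:00 +0000] "GET / HTTP/1.1" 200 512 upstream=10.0.2.20:8080'

# Strip everything up to and including "upstream=" to get the backend
# that actually served the request.
upstream=${logline##*upstream=}
echo "request served by backend: $upstream"
```

Seeing the new backend address here for the exact hostname that exposed the problem, across several requests, is a stronger signal than one successful page load.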