Introduction
Apache's mod_proxy_balancer distributes requests across multiple backend servers. When a backend member becomes unavailable, Apache should automatically route traffic to healthy members. However, misconfigured balancer settings can cause all requests to fail even when healthy backends exist. The error log shows:
```
[Mon Apr 08 22:15:00.123456 2026] [proxy:error] [pid 7890] AH01114: HTTP: failed to make connection to backend: 10.0.1.20
[Mon Apr 08 22:15:00.234567 2026] [proxy_balancer:error] [pid 7890] AH01177: Failed to lookup proxy by worker
```
Symptoms
- Clients receive 503 Service Unavailable even though some backend servers are healthy
- Balancer-manager shows members in "Err" or "Dis" (disabled) state
- Apache does not automatically recover disabled members after they come back online
- All requests go to a single backend while others remain disabled
- Sticky sessions prevent failover when the assigned backend is down
Common Causes
- `retry` parameter set too high, keeping members disabled for too long after failure
- Sticky sessions (`stickysession`) routing requests to a dead backend without failover
- No health check configured, so Apache does not know when a member recovers
- `nofailover=On` preventing requests from being redistributed to other members
- Member marked with `status=+D` (disabled) in balancer-manager and not re-enabled
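Put together, a configuration exhibiting several of these causes at once might look like the following (a hypothetical anti-pattern for illustration; the addresses and values are not from any real deployment):

```apache
# Anti-pattern: long retry, sticky sessions with failover disabled, no health check
<Proxy "balancer://mycluster">
    BalancerMember "http://10.0.1.10:8080" route=node1 retry=300
    BalancerMember "http://10.0.1.20:8080" route=node2 retry=300
    ProxySet stickysession=ROUTEID nofailover=On
</Proxy>
```

With `retry=300`, a failed member stays out of rotation for five minutes even if it recovers immediately, and `nofailover=On` turns any sticky session pinned to a dead member into a hard 503.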
Step-by-Step Fix
1. Check balancer status via balancer-manager. First enable it:

```apache
<Location "/balancer-manager">
    SetHandler balancer-manager
    Require ip 127.0.0.1 ::1 10.0.0.0/8
</Location>
```

Access at `http://localhost/balancer-manager` to see member status.

2. Configure proper retry and failover settings:
```apache
<Proxy "balancer://mycluster">
    BalancerMember "http://10.0.1.10:8080" retry=5 timeout=10
    BalancerMember "http://10.0.1.20:8080" retry=5 timeout=10
    BalancerMember "http://10.0.1.30:8080" retry=5 status=+H
    ProxySet lbmethod=byrequests
</Proxy>
ProxyPass "/app" "balancer://mycluster/app"
ProxyPassReverse "/app" "balancer://mycluster/app"
```

`retry=5` means Apache retries a failed member after 5 seconds in error state. `status=+H` marks a hot standby, which receives traffic only when the regular members are unavailable.
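Failure detection can also be made more aggressive with the balancer parameters `failonstatus` and `failontimeout`, which force a member into error state based on its responses. A hedged sketch (the status code chosen here is illustrative):

```apache
<Proxy "balancer://mycluster">
    BalancerMember "http://10.0.1.10:8080" retry=5
    BalancerMember "http://10.0.1.20:8080" retry=5
    # Treat a 503 from a member, or an I/O read timeout, as a member failure
    ProxySet lbmethod=byrequests failonstatus=503 failontimeout=On
</Proxy>
```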
3. Configure sticky sessions with failover:

```apache
<Proxy "balancer://mycluster">
    BalancerMember "http://10.0.1.10:8080" route=node1 retry=5
    BalancerMember "http://10.0.1.20:8080" route=node2 retry=5
    ProxySet stickysession=ROUTEID nofailover=Off
</Proxy>
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/app" env=BALANCER_ROUTE_CHANGED
```

Setting `nofailover=Off` ensures that when a sticky-session member is down, the request falls through to another member instead of failing.

4. Enable health checking with `mod_proxy_hcheck` (Apache 2.4.21+):

```apache
ProxyHCTemplate hc-tmpl hcmethod=GET hcuri=/health hcinterval=5
<Proxy "balancer://mycluster">
    BalancerMember "http://10.0.1.10:8080" hctemplate=hc-tmpl
    BalancerMember "http://10.0.1.20:8080" hctemplate=hc-tmpl
</Proxy>
```

This actively checks each member's `/health` endpoint every 5 seconds and marks members up or down automatically. Note that `ProxyHCTemplate` takes the template name as its first argument, and the health-check URI parameter is `hcuri`, not `hcpath`.
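`mod_proxy_hcheck` is typically not enabled by default and depends on `mod_watchdog`, so both modules must be loaded (module file paths vary by distribution):

```apache
LoadModule watchdog_module modules/mod_watchdog.so
LoadModule proxy_hcheck_module modules/mod_proxy_hcheck.so
```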
5. Re-enable a disabled member via balancer-manager or from the command line:

```bash
curl "http://localhost/balancer-manager?b=mycluster&w=http://10.0.1.20:8080&nonce=YOUR_NONCE&w_status_D="
```

Replace `YOUR_NONCE` with the nonce value shown in the balancer-manager page URLs.
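The nonce changes per server instance, so hard-coding it is brittle. A minimal sketch of extracting it from the page HTML, assuming the balancer-manager page embeds worker links containing `nonce=<uuid>` (the excerpt below is hypothetical; on a live server you would fetch the page with `curl` first):

```shell
# Hypothetical excerpt of the balancer-manager page; a live fetch would be:
#   page=$(curl -s "http://localhost/balancer-manager")
page='<a href="/balancer-manager?b=mycluster&w=http://10.0.1.20:8080&nonce=61f1b6a1-19c9-4be2-8abe-fe73b9a1e337">'
# Pull the first nonce value out of the HTML
nonce=$(printf '%s' "$page" | grep -o 'nonce=[0-9a-f-]*' | head -n 1 | cut -d'=' -f2)
echo "$nonce"
```

The extracted value can then be substituted into the re-enable request above.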
Prevention
- Always configure active health checks (`ProxyHCTemplate`) for production load balancers
- Set `retry` to a low value (5-10 seconds) to allow quick recovery of failed members
- Monitor balancer member status via the balancer-manager API and alert on members in "Err" state
- Use the `bybusyness` load balancing method to send fewer requests to slower backends
- Document the balancer configuration and member addresses in your runbook
- Test failover regularly by intentionally stopping one backend member and verifying traffic redistribution
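The monitoring item above can be sketched as a simple check for "Err" in the balancer-manager output. The sample table row below is hypothetical and stands in for a live fetch; the exact markup around the status text may differ between Apache versions:

```shell
# Sample of the member-status table; a live fetch would be:
#   page=$(curl -s "http://localhost/balancer-manager")
page='<td>http://10.0.1.10:8080</td><td>Init Ok</td><td>http://10.0.1.20:8080</td><td>Init Err</td>'
# Alert if any member row reports an error state
if printf '%s' "$page" | grep -q 'Err<'; then
  echo "ALERT: balancer member in Err state"
fi
```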