# Fix Apache mod_proxy_balancer Member Unavailable Error
Your Apache load balancer starts routing all traffic to a single backend member, or returns 503 errors for specific routes. The error log shows:
```
[Thu Apr 08 10:15:33.456789 2026] [proxy_balancer:warn] [pid 3456] AH01144: No protocol handler was valid for the URL /balancer-manager. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule.
[Thu Apr 08 10:16:01.123456 2026] [proxy:error] [pid 3457] (111)Connection refused: AH00957: HTTP: attempt to connect to 10.0.1.20:8080 (backend2) failed
[Thu Apr 08 10:16:01.123457 2026] [proxy_balancer:warn] [pid 3457] AH01136: member backend2 (http://10.0.1.20:8080) has been marked as unusable
```

The balancer has marked a backend member as unusable and is no longer sending traffic to it.
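As a first triage step, you can extract the affected member names straight from the error log. A minimal sketch that parses AH01136 lines; the demo feeds it the sample line above, and on a live server you would pipe in your error log instead:

```shell
# Print the unique member names from AH01136 "marked as unusable" lines on stdin.
failed_members() {
  sed -n 's/.*AH01136: member \([^ ]*\) (.*has been marked as unusable.*/\1/p' | sort -u
}

# Demo with the sample log line from above.
# On a live server: failed_members < /var/log/apache2/error.log (path varies by distro).
printf '%s\n' '[Thu Apr 08 10:16:01.123457 2026] [proxy_balancer:warn] [pid 3457] AH01136: member backend2 (http://10.0.1.20:8080) has been marked as unusable' | failed_members
```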
## Understanding Balancer Member States
Apache balancer members can be in these states:

- OK: Active and receiving traffic
- DRN (Draining): Finishing existing requests, not accepting new ones
- DIS (Disabled): Not receiving traffic
- HOT (Standby): Only receives traffic if all other members are down
- ERR (Error): Marked as unusable due to repeated failures
## Step 1: Check Balancer Status
Enable the balancer manager:
```apache
<Location "/balancer-manager">
    SetHandler balancer-manager
    Require ip 127.0.0.1 ::1
</Location>
```

Then check the status:

```bash
curl -s http://localhost/balancer-manager | grep -E "backend|Worker|OK|ERR|DIS"
```

The output shows each member's status, load factor, and failure count.
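The same check can be scripted so that monitoring can alert on failed members. The balancer-manager HTML layout differs across Apache versions, so the sample table row below is hypothetical; treat this as a starting point rather than a robust parser:

```shell
# Print the URLs of workers whose table row contains "Err".
# Assumes each worker's URL and status share one row (version-dependent).
err_workers() {
  grep 'Err' | grep -o 'http://[^<" ]*'
}

# Demo with a hypothetical balancer-manager row:
printf '%s\n' '<tr><td>http://10.0.1.20:8080</td><td>backend2</td><td>Init Err</td></tr>' | err_workers
```

On a live server: `curl -s http://localhost/balancer-manager | err_workers`.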
## Step 2: Diagnose the Failed Backend
Test connectivity to the marked-down backend:
```bash
curl -v http://10.0.1.20:8080/health
telnet 10.0.1.20 8080
```

Common causes:

- Backend process crashed or not started
- Firewall rule blocking port 8080
- Backend is overloaded and timing out
- Network partition between Apache and backend
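For scripted checks, telnet is awkward because it is interactive. A sketch that probes each backend's TCP port with a timeout instead; the backend IPs come from the examples in this guide, and the probe relies on bash's /dev/tcp:

```shell
# Probe a TCP port and report UP/DOWN without hanging indefinitely.
check_port() {
  host=$1; port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port UP"
  else
    echo "$host:$port DOWN"
  fi
}

# Backend IPs are the ones used throughout this guide; adjust for your pool.
for backend in 10.0.1.10 10.0.1.20 10.0.1.30; do
  check_port "$backend" 8080
done
```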
## Step 3: Configure Failure Detection
Apache's balancer does not actively health-check by default. It only marks members as down after a failed request. Add retry and timeout settings:
```apache
<Proxy "balancer://mycluster">
    BalancerMember http://10.0.1.10:8080 route=backend1 retry=30 timeout=5
    BalancerMember http://10.0.1.20:8080 route=backend2 retry=30 timeout=5
    BalancerMember http://10.0.1.30:8080 route=backend3 status=+H
    ProxySet lbmethod=bybusyness
</Proxy>
```

`retry=30` tells Apache to retry a failed member every 30 seconds. `timeout=5` limits how long Apache waits for a backend response before considering it failed. `status=+H` marks backend3 as a hot standby.
## Step 4: Add Active Health Checking
Install mod_proxy_hcheck for active health checking (it depends on mod_watchdog, which must also be loaded):
```apache
LoadModule proxy_hcheck_module modules/mod_proxy_hcheck.so

<Proxy "balancer://mycluster">
    BalancerMember http://10.0.1.10:8080 hcmethod=GET hcuri=/health hcinterval=10
    BalancerMember http://10.0.1.20:8080 hcmethod=GET hcuri=/health hcinterval=10
    ProxySet lbmethod=bybusyness
</Proxy>
```
This actively checks each backend every 10 seconds using GET /health. If a backend fails the health check, Apache removes it from the pool before any client request hits it.
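If you want the check to gate on the response status rather than mere connectivity, mod_proxy_hcheck supports named check expressions. A sketch following the ProxyHCExpr pattern from the mod_proxy_hcheck documentation; `ok234` is just a label, and the expression treats 2xx/3xx/4xx statuses as passing:

```apache
# Name a health-check condition: pass when the status starts with 2, 3, or 4
ProxyHCExpr ok234 {%{REQUEST_STATUS} =~ /^[234]/}

<Proxy "balancer://mycluster">
    BalancerMember http://10.0.1.10:8080 hcmethod=GET hcuri=/health hcinterval=10 hcexpr=ok234
    BalancerMember http://10.0.1.20:8080 hcmethod=GET hcuri=/health hcinterval=10 hcexpr=ok234
    ProxySet lbmethod=bybusyness
</Proxy>
```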
## Step 5: Sticky Sessions and Recovery
If using sticky sessions, a marked-down backend can cause issues for users with existing session cookies:
```apache
<Proxy "balancer://mycluster">
    BalancerMember http://10.0.1.10:8080 route=backend1
    BalancerMember http://10.0.1.20:8080 route=backend2
    ProxySet stickysession=ROUTEID
</Proxy>

Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
```
While a backend is marked down, requests from users whose sticky-session cookie points at it are failed over to another member, breaking affinity (set nofailover=On on the member to return an error instead). The retry parameter ensures Apache reactivates the member after the retry interval, at which point sticky sessions work normally again.
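To see which member a given client is pinned to, inspect the ROUTEID cookie set by the Header directive above. A small sketch; the cookie format follows that Set-Cookie line:

```shell
# Extract the route name from a Set-Cookie header in the format used above.
route_from_cookie() {
  sed -n 's/.*ROUTEID=\.\([^;]*\);.*/\1/p'
}

# Demo with a header matching the config's cookie format:
printf '%s\n' 'Set-Cookie: ROUTEID=.backend1; path=/' | route_from_cookie
```

Against a live server you could capture response headers with `curl -s -D - -o /dev/null http://localhost/ | route_from_cookie` (assuming the balancer fronts `/`).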
## Step 6: Manual Member Management
You can manually manage balancer members through the balancer-manager web interface or via API:
```bash
# Disable a member via balancer-manager POST
curl -X POST http://localhost/balancer-manager \
  -d "w=balancer://mycluster|http://10.0.1.20:8080&nonce=ABC123&dw=Dis&db=Submit"
```

Replace ABC123 with the nonce value shown on your balancer-manager page. This is useful for graceful maintenance: drain a backend, perform maintenance, then re-enable it.
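The drain/maintain/re-enable cycle can be scripted on top of the same POST interface. A sketch that only builds the form body used in the curl call above; the field names (`w`, `nonce`, `dw`, `db`) and action codes vary between Apache versions, and the nonce must still be copied from the balancer-manager page, so treat them as assumptions and check your page's form:

```shell
# Build the POST body for a balancer-manager member action.
# "Dis" disables a member, mirroring the curl example above; other
# action codes are version-dependent assumptions.
member_action_body() {
  cluster=$1; worker=$2; nonce=$3; action=$4
  printf 'w=%s|%s&nonce=%s&dw=%s&db=Submit' "$cluster" "$worker" "$nonce" "$action"
}

member_action_body "balancer://mycluster" "http://10.0.1.20:8080" "ABC123" "Dis"
```

For example: `member_action_body balancer://mycluster http://10.0.1.20:8080 "$NONCE" Dis | curl -X POST http://localhost/balancer-manager -d @-`.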