Introduction
Application Load Balancers reuse backend connections aggressively. If the target server closes an idle keepalive socket before the ALB stops reusing it, the next request sent over that connection hits a dead socket and the ALB returns a 502, even though both the ALB and the target appear healthy between failures.
Symptoms
- ALB returns intermittent 502 errors under normal traffic, not only during outages
- Targets stay healthy in the target group while some requests still fail
- Backend logs show closed keepalive connections or broken pipes
- The issue became visible after changing idle timeout or reverse proxy settings
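If these symptoms fit, the ALB-generated 502 count can be pulled from CloudWatch to quantify the pattern. A sketch using the documented `AWS/ApplicationELB` namespace; the `LoadBalancer` dimension value is a placeholder (it is the `app/<name>/<id>` suffix of your ALB ARN), and the `date` calls assume GNU date:

```
# Sum of ALB-generated 502s over the last hour, in 5-minute buckets.
# The dimension value below is a placeholder for your own ALB.
aws cloudwatch get-metric-statistics \
  --namespace AWS/ApplicationELB \
  --metric-name HTTPCode_ELB_502_Count \
  --dimensions Name=LoadBalancer,Value=app/my-alb/1234567890abcdef \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 300 \
  --statistics Sum
```

A 502 count that rises and falls with traffic, rather than spiking during an outage window, points at connection reuse rather than target failure.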
Common Causes
- Backend keepalive timeout is shorter than the ALB idle timeout, so the ALB reuses sockets the backend has already closed
- A reverse proxy or app server closes idle sockets too aggressively
- Idle timeout changes were applied on one layer but not the other
- Connection reuse behavior differs between target types or server pools
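The causes above reduce to one invariant: the backend must hold idle sockets open longer than the ALB does. A minimal sketch of that check, using the common defaults (60s ALB idle timeout, 75s Nginx keepalive) as placeholder values:

```shell
# Placeholder values: 60s is the ALB default idle timeout; substitute your own.
alb_idle_timeout=60
backend_keepalive=75   # e.g. Nginx keepalive_timeout
if [ "$backend_keepalive" -gt "$alb_idle_timeout" ]; then
  echo "OK: backend outlives the ALB idle timeout"
else
  echo "RISK: backend may close sockets the ALB still plans to reuse"
fi
```

Whenever either value changes, re-run the comparison for every hop in the path, not just the first one behind the ALB.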
Step-by-Step Fix
1. Confirm the 502 pattern and backend connection timing

   Look for intermittent failures that correlate with reused idle connections rather than complete target outages, and confirm the targets report healthy while requests fail:

   ```shell
   aws elbv2 describe-target-health --target-group-arn <tg-arn>
   ```

2. Review backend keepalive and idle timeout settings
   Check the app server, Nginx, or Apache layer behind the ALB for keepalive values shorter than the traffic pattern needs:

   ```nginx
   # Keep idle connections open longer than the ALB idle timeout (60s by default)
   keepalive_timeout 75s;
   ```

3. Align timeouts so the backend stays open long enough
   The backend should not close reusable idle sockets before the ALB is done with them. Check the ALB's `idle_timeout.timeout_seconds` attribute (60 seconds by default) and keep the backend keepalive timeout above it:

   ```shell
   aws elbv2 describe-load-balancer-attributes --load-balancer-arn <alb-arn>
   ```

4. Retest with fresh traffic after the timeout change
   Watch the ALB 502 rate and backend logs after applying the new keepalive settings:

   ```shell
   curl -I https://example.com
   ```

Prevention
- Review idle timeout and keepalive values as one end-to-end path, not per component
- Document ALB timeout assumptions in backend reverse proxy config
- Retest intermittent 502 patterns after proxy or app server tuning
- Monitor ALB 5xx and backend broken-pipe errors together
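The last point can be scripted so ALB-generated and target-generated errors are always read side by side. A sketch using the documented `AWS/ApplicationELB` metric names; the `LoadBalancer` dimension value is a placeholder, and the `date` calls assume GNU date:

```
# Compare ALB-generated vs target-generated 5xx over the last hour.
# The dimension value below is a placeholder for your own ALB.
for metric in HTTPCode_ELB_5XX_Count HTTPCode_Target_5XX_Count; do
  aws cloudwatch get-metric-statistics \
    --namespace AWS/ApplicationELB \
    --metric-name "$metric" \
    --dimensions Name=LoadBalancer,Value=app/my-alb/1234567890abcdef \
    --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --period 300 --statistics Sum
done
```

ALB-generated 5xx rising while target-generated 5xx stays flat is the signature of the connection-reuse failure described here: the target never saw the failed request.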