# Fix Apache MaxRequestWorkers Reached 503 Service Unavailable
Your Apache server starts returning 503 Service Unavailable errors during peak traffic. The error log shows:
```
[Thu Apr 08 14:23:07.123456 2026] [mpm_prefork:notice] [pid 1234] AH00163: Apache/2.4.58 configured -- resuming normal operations
[Thu Apr 08 14:30:15.654321 2026] [mpm_prefork:error] [pid 1234] AH00161: server reached MaxRequestWorkers setting, consider raising the MaxRequestWorkers setting
```

This means all available Apache worker processes/threads are busy serving existing requests. New connections queue in the kernel until they hit the ListenBacklog limit, at which point Apache refuses them with a 503.
## Understanding MPM Worker Limits
Apache uses Multi-Processing Modules (MPM) to handle concurrent connections. Check which MPM is active:
```bash
apache2ctl -V | grep -i mpm
# or
httpd -V | grep -i mpm
```

The three MPMs have different capacity formulas:
- **prefork**: MaxRequestWorkers = maximum simultaneous requests (one process per request)
- **worker**: MaxRequestWorkers = ServerLimit * ThreadsPerChild (threads handle requests; ThreadLimit is the hard ceiling on ThreadsPerChild)
- **event**: same as worker, but with optimized keepalive handling
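Before tuning, it helps to see where the active MPM's limits are currently set. A quick way to do that (a sketch assuming the Debian/Ubuntu config layout; on RHEL-family systems grep `/etc/httpd/conf.modules.d/` instead):

```shell
# Show which MPM module is loaded, then find its current limits on disk
apache2ctl -M 2>/dev/null | grep -i mpm
grep -Rn "MaxRequestWorkers\|ServerLimit\|ThreadsPerChild" /etc/apache2/mods-enabled/
```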
## Calculating the Right MaxRequestWorkers
For prefork, each Apache process consumes significant memory:
```bash
ps -ylC apache2 --sort=rss | awk 'NR>1 {sum+=$8; n++} END {printf "Avg process: %dMB, Total: %dMB\n", sum/1024/n, sum/1024}'
```

(`NR>1` skips the `ps` header line so it doesn't inflate the process count.) If each process uses 80MB and you have 4GB available for Apache:

```
MaxRequestWorkers = 4096 / 80 = 51
```

Round down to leave headroom for the OS. Never set MaxRequestWorkers so high that Apache processes trigger swap usage.
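The same arithmetic in script form, using the worked numbers from the text (both variables are placeholders; plug in your own measurements):

```shell
# Estimate a prefork MaxRequestWorkers from available memory and average RSS
mem_for_apache_mb=4096   # memory you can dedicate to Apache
avg_process_mb=80        # average RSS per Apache process, from ps above
# Shell integer division truncates, which rounds down for you
echo $((mem_for_apache_mb / avg_process_mb))   # -> 51
```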
For worker/event, threads are much lighter:
```
MaxRequestWorkers = ServerLimit * ThreadsPerChild
```

A typical worker MPM configuration:
```apache
<IfModule mpm_worker_module>
    StartServers             3
    MinSpareThreads         75
    MaxSpareThreads        250
    ThreadLimit             64
    ThreadsPerChild         25
    MaxRequestWorkers      400
    MaxConnectionsPerChild 5000
</IfModule>
```

This gives 400 concurrent connections (16 server processes * 25 threads each) while using far less memory than prefork.
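The capacity claim follows directly from the formula above, given that ServerLimit defaults to 16 for worker/event when it is not set explicitly:

```shell
# worker/event capacity = ServerLimit * ThreadsPerChild
server_limit=16        # the default when ServerLimit is unset
threads_per_child=25
echo $((server_limit * threads_per_child))   # -> 400, matching MaxRequestWorkers
```

If you raise MaxRequestWorkers past this product, also raise ServerLimit, or Apache will silently cap you at the old ceiling.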
## Diagnosing Current Usage
Enable mod_status to see real-time worker utilization:
```apache
<Location "/server-status">
    SetHandler server-status
    Require ip 127.0.0.1 ::1
</Location>
```

Then check:

```bash
curl -s "http://localhost/server-status?auto" | head -20
```

Key metrics:
- Total Accesses: Total requests served since start
- BusyWorkers: Currently serving requests
- IdleWorkers: Waiting for requests
- ConnsTotal: Total connections
If BusyWorkers consistently equals MaxRequestWorkers, you have hit the ceiling.
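To watch utilization live during a spike, you can poll the machine-readable endpoint (this assumes the mod_status configuration shown above is enabled):

```shell
# Show busy/idle worker counts, refreshed every 2 seconds
watch -n 2 'curl -s "http://localhost/server-status?auto" | grep -E "^(BusyWorkers|IdleWorkers)"'
```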
## Handling Traffic Spikes Gracefully
When you cannot raise MaxRequestWorkers further due to memory constraints, use connection queuing:
```apache
ListenBacklog 65535
```

This increases the OS-level connection queue, allowing clients to wait in the TCP backlog instead of receiving immediate 503 errors. The queued clients will be served as workers become available.
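One caveat: on Linux the kernel caps the effective queue at `net.core.somaxconn`, so the listener gets min(ListenBacklog, somaxconn). A quick check (the 65535 mirrors the directive above; the 4096 fallback is a common kernel default, an assumption on your system):

```shell
listen_backlog=65535
# Read the kernel ceiling; fall back to a common default if /proc is unavailable
somaxconn=$(cat /proc/sys/net/core/somaxconn 2>/dev/null || echo 4096)
effective=$(( listen_backlog < somaxconn ? listen_backlog : somaxconn ))
echo "Effective backlog: $effective"
```

If `somaxconn` is the bottleneck, raise it with `sysctl -w net.core.somaxconn=65535` and persist it in `/etc/sysctl.d/`.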
Combine with timeout optimization:
```apache
Timeout 60
KeepAliveTimeout 5
```

Shorter timeouts free workers faster, increasing throughput without raising MaxRequestWorkers.
## Monitoring and Alerting
Set up monitoring on the Apache status page:
```bash
#!/bin/bash
# Alert when Apache worker utilization crosses a threshold
busy=$(curl -s "http://localhost/server-status?auto" | grep "BusyWorkers" | awk '{print $2}')
max=400   # must match your MaxRequestWorkers setting
pct=$((busy * 100 / max))

if [ "$pct" -gt 80 ]; then
    echo "WARNING: Apache workers at ${pct}% capacity (${busy}/${max})"
fi
```
Alert at 80% capacity so you can scale before hitting the hard limit. If you consistently exceed MaxRequestWorkers, consider adding a reverse proxy (Nginx) in front of Apache to handle static files and connection management, freeing Apache workers for dynamic content only.
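To run the check on a schedule, a crontab entry like the following works (the script path is hypothetical; point it at wherever you saved the script):

```
* * * * * /usr/local/bin/apache-worker-check.sh 2>&1 | logger -t apache-workers
```

Routing output through `logger` puts the warnings in syslog, where your existing alerting can pick them up.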