Introduction
Laravel queue workers are long-running PHP processes that handle jobs sequentially. Unlike web requests where PHP starts fresh each time, workers accumulate memory over time from unclosed resources, large job payloads, ORM model caching, and memory leaks in dependencies. When memory exceeds the PHP limit, the worker crashes and jobs are retried, potentially creating a cascading failure.
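To see that accumulation in isolation, here is a minimal standalone PHP sketch (not Laravel-specific, purely illustrative): each loop iteration stands in for a job that leaves state behind, the way a leaky listener or static cache would in a real worker.

```php
// Illustrative only: each "job" leaks 1 MB that is never freed,
// so the process footprint grows until PHP's memory_limit is hit.
$leaked = [];
foreach (range(1, 5) as $job) {
    $leaked[] = str_repeat('x', 1024 * 1024);
    printf("after job %d: %.0f MB\n", $job, memory_get_usage(true) / 1024 / 1024);
}
```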
Symptoms
- `Allowed memory size of 134217728 bytes exhausted` in worker logs
- Queue worker crashes and restarts repeatedly
- Jobs fail after many have been processed successfully
- `horizon:status` shows workers dying and respawning
- Memory usage grows steadily with each processed job
```
[2024-01-15 10:30:00] processing: App\Jobs\ProcessImport
[2024-01-15 10:30:05] processed: App\Jobs\ProcessImport
[2024-01-15 10:30:05] processing: App\Jobs\SendNotification
[2024-01-15 10:30:06] failed: App\Jobs\SendNotification

PHP Fatal error: Allowed memory size of 134217728 bytes exhausted
(tried to allocate 67108864 bytes) in /app/vendor/laravel/framework/src/Illuminate/Queue/Worker.php
```

Common Causes
- Large Eloquent models loaded and not freed
- Job payload containing large data structures
- Memory leaks in third-party packages
- Worker processing too many jobs without restart
- Event listeners accumulating in memory
Step-by-Step Fix
1. Configure the worker memory limit and job threshold:

```bash
# Run the worker with a memory limit (in MB)
php artisan queue:work --memory=512

# Or in a Supervisor config
# /etc/supervisor/conf.d/laravel-worker.conf
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/artisan queue:work redis --sleep=3 --tries=3 --memory=512 --timeout=120
numprocs=4
```
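For context, `--memory` is a soft limit: after each job the worker compares its own usage to the threshold and, if exceeded, exits cleanly so the process manager can respawn a fresh process. The check is roughly equivalent to this sketch (the real logic lives in `Illuminate\Queue\Worker::memoryExceeded()`):

```php
// Simplified sketch of the worker's per-job memory check
if ((memory_get_usage(true) / 1024 / 1024) >= $memoryLimitMb) {
    // The worker stops itself before PHP's hard limit is reached;
    // Supervisor then starts a fresh process
    exit(12); // Worker::EXIT_MEMORY_LIMIT
}
```

This is why `--memory` should sit below PHP's `memory_limit`: the graceful exit has to fire before the fatal error would.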
2. Use `--max-jobs` to recycle workers:

```bash
# Restart the worker after processing N jobs to free accumulated memory
php artisan queue:work --max-jobs=100

# Or restart after a time limit (in seconds)
php artisan queue:work --max-time=3600

# Combined: restart after 100 jobs OR 1 hour, whichever comes first
php artisan queue:work --max-jobs=100 --max-time=3600
```
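Note that `--max-jobs` and `--max-time` only help if something restarts the worker after it exits. Under Supervisor that means keeping `autorestart` on; a sketch of the relevant lines:

```ini
; /etc/supervisor/conf.d/laravel-worker.conf (excerpt)
[program:laravel-worker]
command=php /var/www/html/artisan queue:work redis --max-jobs=100 --max-time=3600
autostart=true
autorestart=true   ; respawn the worker each time it recycles itself
stopwaitsecs=130   ; give an in-flight job (timeout=120) time to finish on stop
```

Keeping `stopwaitsecs` above the job `timeout` avoids Supervisor killing a job mid-flight during restarts.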
3. Reduce job payload size:

```php
// WRONG - passing an entire large data structure into the job
class ProcessImport implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, SerializesModels;

    public function __construct(
        public array $hugeData // Could be megabytes of data
    ) {}
}

// CORRECT - pass an ID and fetch the data in the handler
class ProcessImport implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, SerializesModels;

    public function __construct(
        public int $importId // Just an integer
    ) {}

    public function handle(): void
    {
        $import = Import::find($this->importId);

        // Process in chunks instead of loading everything at once
        $import->records()->chunk(500, function ($records) {
            foreach ($records as $record) {
                $this->processRecord($record);
            }

            // Free memory between chunks
            gc_collect_cycles();
        });
    }
}
```
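With the ID-only constructor, dispatching looks the same but the serialized payload shrinks to a few bytes. A usage sketch (assuming `$import` is an existing `Import` model):

```php
// The queued payload now carries just the integer ID,
// not the potentially huge import data itself.
ProcessImport::dispatch($import->id);
```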
4. Configure Laravel Horizon for better memory management:

```php
// config/horizon.php
'environments' => [
    'production' => [
        'supervisor-1' => [
            'maxProcesses' => 10,
            'balanceMaxShift' => 1,
            'balanceCooldown' => 3,
            'memory' => 256, // Restart if memory exceeds 256MB
            'tries' => 3,
            'timeout' => 120,
            'maxJobs' => 500, // Restart after 500 jobs
        ],
    ],
],
```
5. Detect memory leaks in job processing:

```php
// In AppServiceProvider
public function boot(): void
{
    Queue::after(function (JobProcessed $event) {
        $memory = memory_get_usage(true) / 1024 / 1024;

        if ($memory > 200) { // Alert if over 200MB
            Log::warning('Queue worker memory high', [
                // resolveName() returns the queued job's class name;
                // get_class($event->job) would return the driver's
                // wrapper class (e.g. RedisJob) instead
                'job' => $event->job->resolveName(),
                'memory_mb' => round($memory, 1),
            ]);
        }
    });
}
```
Prevention
- Set the `--memory` flag to about 75% of PHP's `memory_limit`
- Use `--max-jobs` to periodically recycle workers
- Process large datasets in chunks with `chunk()` or `cursor()`
- Call `gc_collect_cycles()` after processing large data
- Monitor worker memory in the Horizon dashboard
- Use the `SerializesModels` trait so jobs serialize only model IDs, not full models
- Set an appropriate `timeout` to kill stuck jobs before they consume all memory
- Use the `--rest` flag to add a pause between jobs for memory cleanup
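Relatedly, after each deploy you can signal every worker to exit gracefully once its current job finishes, so all workers come back fresh (Supervisor or Horizon respawns them):

```bash
# Gracefully restart all queue workers after deploying new code
php artisan queue:restart
```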