Introduction

Laravel queue workers are long-running PHP processes that process jobs sequentially. Unlike request-scoped PHP scripts, whose memory is released after each request, workers accumulate memory over time from job processing: unclosed database cursors, cached ORM models, loaded file contents, and registered event listeners. When a worker exceeds its memory limit (128MB by default), Laravel stops it gracefully after the current job finishes, and the process supervisor must restart it. Under high job throughput in production, this causes frequent restarts and processing delays, and if the OS OOM-kills the process mid-job instead, jobs can be lost.
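
This accumulation is easy to reproduce in plain PHP, no Laravel required. The sketch below simulates jobs that retain references to their payloads (the payload size and job count are arbitrary illustration values):

```php
// Simulate a worker whose "jobs" leak references to their payloads.
$retained = [];
$before = memory_get_usage();

for ($job = 0; $job < 50; $job++) {
    // Each job loads a ~100 KB payload and never releases it.
    $retained[] = str_repeat('x', 100 * 1024);
}

$after = memory_get_usage();
echo 'Memory grew by ' . round(($after - $before) / 1024 / 1024, 1) . " MB\n";

// Dropping the references lets PHP reuse the memory for later jobs.
$retained = [];
```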

Symptoms

Worker logs show memory warnings:

```bash
[2024-03-15 10:23:45] local.INFO: Processing job: App\Jobs\ProcessReport
[2024-03-15 10:23:50] local.INFO: Processed: App\Jobs\ProcessReport
[2024-03-15 10:23:51] local.INFO: Processing job: App\Jobs\SendNotification
[2024-03-15 10:23:52] local.WARNING: Stopping worker. Memory limit exceeded: 128MB
```

Supervisor logs show frequent restarts:

```bash
2024-03-15 10:23:52,123 INFO stopped: laravel-worker_01 (exit status 0)
2024-03-15 10:23:53,456 INFO spawned: 'laravel-worker_01' with pid 12345
```

Or the OS OOM killer terminates the process:

```bash
$ dmesg | tail
[98765.432] Out of memory: Killed process 12345 (php) total-vm:2048000kB
```

Common Causes

  • Processing large datasets in a single job: Loading thousands of records into memory
  • Event listeners accumulating: Each job fires events, and listeners registered at runtime are never cleaned up
  • Database query result buffering: Eloquent loads full result sets into memory
  • File processing without chunking: Reading entire CSV/Excel files into memory
  • Memory limit too low: --memory=128 is insufficient for complex jobs
  • Circular references preventing garbage collection: Objects referencing each other block PHP GC
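
The last cause deserves a concrete illustration: PHP's reference counting cannot free objects that point at each other; only the cycle collector can. A standalone sketch (the Node class and sizes are made up for demonstration):

```php
// Two objects referencing each other never hit refcount zero,
// so plain reference counting cannot free them.
class Node
{
    public ?Node $peer = null;
    public string $payload;

    public function __construct()
    {
        $this->payload = str_repeat('x', 64 * 1024); // ~64 KB per node
    }
}

gc_collect_cycles(); // start from a clean slate
$before = memory_get_usage();

for ($i = 0; $i < 100; $i++) {
    $a = new Node();
    $b = new Node();
    $a->peer = $b;
    $b->peer = $a; // cycle: neither refcount can reach zero
}
unset($a, $b);

$leaked = memory_get_usage() - $before;    // the cycles are still held
$collected = gc_collect_cycles();          // the cycle collector breaks them
$remaining = memory_get_usage() - $before; // back near the baseline

echo 'held: ' . round($leaked / 1024) . " KB, collected: {$collected}, "
    . 'remaining: ' . round($remaining / 1024) . " KB\n";
```

In a long-running worker these cycles build up between the collector's automatic runs, which is why forcing gc_collect_cycles() periodically (see Step 2) helps.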

Step-by-Step Fix

Step 1: Configure worker memory limit and restart threshold

```ini
; /etc/supervisor/conf.d/laravel-worker.conf
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/myapp/artisan queue:work sqs --sleep=3 --tries=3 --max-time=3600 --memory=256
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=www-data
numprocs=4
redirect_stderr=true
stdout_logfile=/var/log/laravel-worker.log
stopwaitsecs=3600
```

The --memory=256 flag tells the worker to stop gracefully once it exceeds 256MB, and --max-time=3600 forces a restart every hour regardless of memory usage.
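
Internally, the check is simple: after each job, the worker compares memory_get_usage(true), converted to megabytes, against the flag value. A simplified sketch of that check (modeled on the memoryExceeded() method in Laravel's Illuminate\Queue\Worker):

```php
// Simplified version of the worker's per-job memory check.
function memoryExceeded(int $memoryLimitMb): bool
{
    return (memory_get_usage(true) / 1024 / 1024) >= $memoryLimitMb;
}

// When this returns true after a job, the worker stops gracefully
// and the supervisor spawns a fresh process in its place.
$shouldStop = memoryExceeded(256);
```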

Step 2: Use chunking for large dataset processing

```php
// WRONG - loads all records into memory
public function handle(): void
{
    $users = User::where('active', true)->get(); // Could be millions

    foreach ($users as $user) {
        $this->processUser($user);
    }
}

// CORRECT - uses cursor() for lazy loading
public function handle(): void
{
    $count = 0;

    User::where('active', true)
        ->cursor() // Returns a LazyCollection; hydrates one record at a time
        ->each(function ($user) use (&$count) {
            $this->processUser($user);

            // Force garbage collection periodically
            if (++$count % 100 === 0) {
                gc_collect_cycles();
            }
        });
}
```
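
The difference between get() and cursor() boils down to an array versus a generator. This standalone sketch (simulated 1 KB rows, no database) shows the memory gap:

```php
// Array approach: all rows exist in memory at once, like get().
function allRows(int $n): array
{
    $rows = [];
    for ($i = 0; $i < $n; $i++) {
        $rows[] = str_repeat('x', 1024); // simulate a ~1 KB row
    }
    return $rows;
}

// Generator approach: only one row exists at a time, like cursor().
function rowCursor(int $n): \Generator
{
    for ($i = 0; $i < $n; $i++) {
        yield str_repeat('x', 1024);
    }
}

$base = memory_get_usage();
$all = allRows(10000);
$arrayCost = memory_get_usage() - $base;
unset($all);

$base = memory_get_usage();
$count = 0;
foreach (rowCursor(10000) as $row) {
    $count++;
}
$cursorCost = memory_get_usage() - $base;

echo 'array: ' . round($arrayCost / 1024) . ' KB, generator: '
    . round($cursorCost / 1024) . " KB, rows: {$count}\n";
```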

Step 3: Clean up after each job

```php
public function handle(): void
{
    try {
        // Process the job
        $this->generateReport();
    } finally {
        // Clean up - runs even if the job throws
        $this->cleanupTempFiles();

        // Close the database connection to release buffered results
        \Illuminate\Support\Facades\DB::disconnect();

        // Force garbage collection
        gc_collect_cycles();
    }
}
```
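
The key property of finally is that cleanup runs whether the job succeeds or throws. A plain-PHP sketch of the pattern (the runJob function and log array are illustrative only):

```php
$log = [];

function runJob(array &$log, bool $fail): void
{
    try {
        $log[] = 'processing';
        if ($fail) {
            throw new RuntimeException('job failed');
        }
    } finally {
        // Runs on success AND on failure, before the exception propagates.
        $log[] = 'cleanup';
    }
}

runJob($log, false); // success path: processing, then cleanup

try {
    runJob($log, true); // failure path: cleanup still runs, then the throw
} catch (RuntimeException $e) {
    $log[] = 'caught: ' . $e->getMessage();
}

print_r($log);
```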

Step 4: Monitor worker memory in production

```php
// In AppServiceProvider::boot()
use Illuminate\Queue\Events\JobProcessed;
use Illuminate\Support\Facades\Log;
use Illuminate\Support\Facades\Queue;

Queue::after(function (JobProcessed $event) {
    $memoryMB = round(memory_get_usage(true) / 1024 / 1024, 1);
    $jobName = $event->job->resolveName();

    Log::info('Queue job completed', [
        'job' => $jobName,
        'memory_mb' => $memoryMB,
        'queue' => $event->job->getQueue(),
    ]);

    if ($memoryMB > 200) {
        Log::warning("High memory usage after job: {$jobName} ({$memoryMB}MB)");
    }
});
```
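
One caveat when reading these numbers: memory_get_usage() reports what is held right now, so a job that allocates and frees a large buffer can look harmless after the fact. memory_get_peak_usage() captures the high-water mark and exposes such spikes; a sketch (the 8 MB buffer is an arbitrary example):

```php
$currentBefore = memory_get_usage();

// Simulate a job that allocates a large buffer, then frees it.
$buffer = str_repeat('x', 8 * 1024 * 1024); // 8 MB spike
unset($buffer);

$currentAfter = memory_get_usage();  // back near the baseline
$peak = memory_get_peak_usage();     // still remembers the spike

echo 'current: ' . round($currentAfter / 1024 / 1024, 1) . ' MB, '
    . 'peak: ' . round($peak / 1024 / 1024, 1) . " MB\n";
```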

Prevention

  • Set --memory to 256MB or higher for complex jobs
  • Use cursor() or lazy() instead of get() for large result sets
  • Add --max-time=3600 to force periodic worker restarts
  • Monitor memory usage per job type to identify memory-heavy jobs
  • Use gc_collect_cycles() after processing large batches
  • Disconnect database connections in long-running jobs to release memory
  • Consider splitting large jobs into smaller sub-jobs that process in batches