Introduction

Laravel's queue workers run as long-running processes that pick up jobs from the queue. Workers can stop processing for several reasons: Supervisor misconfiguration, memory leaks from accumulated model instances, jobs that fail silently without being marked as failed, and connection timeouts to the queue backend. Without proper configuration, workers consume ever more memory over time, jobs pile up unprocessed, and failed jobs vanish without any record. The fix involves running workers under Supervisor for process management, setting memory limits for graceful restarts, and implementing proper error handling in job classes.

Symptoms

```bash
# Jobs in queue but not being processed
php artisan queue:monitor default
# Output: Queue default has 1500 pending jobs
```

Or worker memory leak:

```bash
[2026-04-09 10:00:00] production.ERROR: Out of memory
# Worker process killed by OOM killer
```

Or failed jobs lost:

```bash
[2026-04-09 10:00:00] production.ERROR: Job failed
# But nothing in failed_jobs table - job silently lost
```

Common Causes

  • Supervisor not configured: Worker not automatically restarted on crash
  • No memory limit: Worker runs until OOM killed
  • Job timeout too short: Long jobs killed before completion
  • Failed job handler not configured: Failed jobs not recorded
  • Queue connection misconfigured: Worker connected to wrong queue
  • Model instances not freed: Eloquent models accumulate in memory
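Most of these causes can be ruled out quickly from the command line. A hedged sketch of the checks (the `laravel-worker` program name matches the Supervisor config used in Step 1 below; adjust to your setup):

```shell
# Which queue connection is the app actually using?
php artisan tinker --execute="echo config('queue.default');"

# Are any jobs recorded as failed at all?
php artisan queue:failed
```

If the connection printed here differs from the one the worker was started with, jobs are being pushed to a queue no worker is reading.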

Step-by-Step Fix

Step 1: Configure Supervisor for queue workers

```ini
; /etc/supervisor/conf.d/laravel-worker.conf
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/artisan queue:work redis --sleep=3 --tries=3 --max-time=3600 --memory=512
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=www-data
numprocs=4
redirect_stderr=true
stdout_logfile=/var/log/supervisor/laravel-worker.log
stopwaitsecs=3600
```

```bash
# Reload supervisor
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start laravel-worker:*
```
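After reloading, it is worth confirming the workers are actually up, and remembering that `queue:work` keeps the application in memory across jobs, so workers must be restarted after every deploy. A short sketch:

```shell
# Confirm all four numbered workers show RUNNING
sudo supervisorctl status laravel-worker:*

# After each deploy: workers finish their current job, then exit,
# and Supervisor restarts them with the new code
php artisan queue:restart
```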

Step 2: Configure job class with retries and timeout

```php
namespace App\Jobs;

use App\Notifications\JobFailedNotification; // app-defined notification; adjust namespace
use App\Services\ReportService;              // adjust to your service's namespace
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Log;
use Illuminate\Support\Facades\Notification;

class ProcessReport implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $tries = 3;          // Max retry attempts
    public $backoff = 60;       // Seconds between retries
    public $timeout = 300;      // Max execution time in seconds
    public $maxExceptions = 5;  // Max exceptions before marking failed

    public function __construct(
        public int $reportId
    ) {}

    public function handle(ReportService $service): void
    {
        $service->generate($this->reportId);
    }

    public function failed(\Throwable $exception): void
    {
        // Called when the job fails after all retries
        Log::error("Report job failed: {$this->reportId}", [
            'exception' => $exception->getMessage(),
        ]);

        // Notify admin
        Notification::route('mail', 'admin@example.com')
            ->notify(new JobFailedNotification($this, $exception));
    }
}
```

Step 3: Monitor queue health

```php
// app/Console/Commands/MonitorQueue.php
namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Queue;

class MonitorQueue extends Command
{
    protected $signature = 'queue:health-check';

    public function handle(): int
    {
        // Queue::size() works for any driver; counting the database
        // jobs table would always return 0 on the Redis driver used above.
        $pending = Queue::size('default');
        $failed = DB::table('failed_jobs')->count();

        if ($pending > 1000) {
            $this->warn("High pending jobs: {$pending}");
        }

        if ($failed > 100) {
            $this->error("High failed jobs: {$failed}");
        }

        $this->info("Queue status - Pending: {$pending}, Failed: {$failed}");

        // Non-zero exit marks the run as failed
        return $pending > 5000 ? 1 : 0;
    }
}
```
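To run this check automatically, the command can be scheduled. A sketch for `app/Console/Kernel.php` on Laravel 10 and earlier (on Laravel 11+ the equivalent goes in `routes/console.php`); the intervals are illustrative:

```php
use Illuminate\Console\Scheduling\Schedule;

protected function schedule(Schedule $schedule): void
{
    // Run the health check every five minutes
    $schedule->command('queue:health-check')->everyFiveMinutes();

    // Keep the failed_jobs table from growing without bound
    $schedule->command('queue:prune-failed --hours=48')->daily();
}
```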

Prevention

  • Always run queue workers under supervisor for automatic restart
  • Set --memory flag to restart workers before they leak too much memory
  • Configure $tries and $timeout on every job class
  • Implement failed() method for notification when jobs fail permanently
  • Monitor queue size with scheduled health checks
  • Use --max-time to restart workers periodically, freeing accumulated memory
  • Run php artisan queue:prune-failed regularly to clean old failed jobs