Introduction
GitLab uses Sidekiq to process background jobs including merge request webhooks, CI/CD pipeline triggers, email notifications, and more. When the Sidekiq queue backs up -- due to insufficient workers, slow job processing, or a job storm -- webhooks are delayed, merge requests do not trigger pipelines promptly, and the entire GitLab experience degrades.
Symptoms
- Merge request webhooks fire minutes or hours after the MR is created or updated
- CI/CD pipelines start with significant delay after pushing commits
- Sidekiq queue page shows thousands of pending jobs
- Email notifications are delayed or arrive out of order
- Error message:

  ```
  Sidekiq queue backlog: default queue has 5000 pending jobs
  ```
Common Causes
- Insufficient Sidekiq workers configured for the workload
- A single long-running job blocking the default queue
- Scheduled job (e.g., repository check, import/export) creating a job storm
- Redis connection pool exhausted, preventing job processing
- Sidekiq process crashed or restarted, losing in-progress jobs
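Whether worker count is really the bottleneck can be sanity-checked with back-of-the-envelope math: expected drain time is roughly backlog ÷ (concurrency × jobs per second per thread). A minimal Ruby sketch, with purely illustrative numbers (the function name and the 1.5 s average job duration are assumptions, not GitLab API or measured values):

```ruby
# Rough estimate of how long a Sidekiq backlog takes to drain, assuming
# each worker thread processes jobs serially at a steady average rate.
def drain_time_seconds(backlog:, concurrency:, avg_job_seconds:)
  throughput = concurrency / avg_job_seconds.to_f # jobs finished per second
  (backlog / throughput).ceil
end

# 5000 pending jobs, 25 worker threads, ~1.5 s per job:
drain_time_seconds(backlog: 5000, concurrency: 25, avg_job_seconds: 1.5)
# => 300 seconds, about five minutes -- ignoring jobs enqueued meanwhile
```

If the estimate is hours rather than minutes, raising concurrency (or adding Sidekiq processes) is the right lever; if it is already short, look for a stuck job or Redis problem instead.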
Step-by-Step Fix
1. Check Sidekiq queue status: Identify the backlog size and the affected queues.

   ```ruby
   # GitLab UI: Admin > Monitoring > Background Jobs
   # Check queue lengths: default, mailers, pipeline, etc.
   # Or via the Rails console:
   #   gitlab-rails console
   > Sidekiq::Queue.all.map { |q| [q.name, q.size] }
   ```

2. Increase Sidekiq concurrency: Add more worker threads to process the backlog.

   ```ruby
   # In /etc/gitlab/gitlab.rb
   sidekiq['max_concurrency'] = 25 # Default is 10
   sidekiq['queue_selector'] = true
   # Apply the change:
   #   gitlab-ctl reconfigure
   ```

3. Restart Sidekiq to clear stuck jobs: Restart the workers.

   ```bash
   # Graceful restart
   gitlab-ctl hup sidekiq
   # If stuck, force restart
   gitlab-ctl restart sidekiq
   ```

4. Identify and kill stuck jobs: Find jobs that are blocking the queue.

   ```ruby
   # In the Rails console (gitlab-rails console)
   > Sidekiq::RetrySet.new.each { |job| puts job.jid if job.klass == 'SomeWorker' }
   > # Clear the retry set -- caution: this removes ALL retried jobs, not just the stuck one
   > Sidekiq::RetrySet.new.clear
   ```

5. Monitor queue drain rate: Verify the backlog is clearing.

   ```bash
   # Check queue size every 30 seconds
   watch -n 30 "gitlab-rails runner 'puts Sidekiq::Queue.all.map { |q| \"#{q.name}: #{q.size}\" }.join(\"\\n\")'"
   ```
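Building on the `queue_selector` option in step 2, critical work can be isolated into its own Sidekiq process so a mailer storm cannot starve pipelines. A sketch of the relevant `gitlab.rb` fragment -- the option names follow Omnibus GitLab conventions, but verify them against your GitLab version, since `queue_selector` has been deprecated in favor of routing rules in recent releases:

```ruby
# /etc/gitlab/gitlab.rb -- sketch only; confirm option names for your version.
sidekiq['queue_selector'] = true

# Run two Sidekiq processes: one dedicated to high-urgency jobs
# (pipelines, webhooks), one catching everything else.
sidekiq['queue_groups'] = [
  'urgency=high', # critical queues
  '*'             # catch-all for remaining queues
]
# Then: gitlab-ctl reconfigure
```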
Prevention
- Monitor Sidekiq queue lengths and alert when queues exceed 100 pending jobs
- Scale Sidekiq concurrency based on the number of CPU cores available
- Separate critical queues (pipeline, webhook) from non-critical ones (mailers)
- Implement job rate limiting for bulk operations that can create job storms
- Run Redis with Sentinel-based high availability so Sidekiq survives a Redis failover (Sidekiq does not support Redis Cluster)
- Regularly review Sidekiq job performance and optimize slow workers
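The first prevention point -- alerting when any queue exceeds 100 pending jobs -- can be sketched as a small Ruby helper. The function name and threshold are assumptions for illustration; in production the sizes would come from `Sidekiq::Queue.all` and the result would feed your alerting system:

```ruby
# Return the queues whose backlog exceeds the alert threshold.
# `queue_sizes` is a Hash of queue name => pending-job count, e.g. built
# from Sidekiq::Queue.all.to_h { |q| [q.name, q.size] }.
ALERT_THRESHOLD = 100

def queues_over_threshold(queue_sizes, threshold: ALERT_THRESHOLD)
  queue_sizes.select { |_name, size| size > threshold }
end

sizes = { 'default' => 5000, 'mailers' => 42, 'pipeline' => 180 }
queues_over_threshold(sizes)
# => {"default"=>5000, "pipeline"=>180}
```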