
DigitalViking

Prevent race conditions and queue best practices

Hi Laracasts community,

I’m currently facing an issue in our application for which I haven’t yet found a fully satisfying solution. Let me give you some context:

I’m working on a multi-integration setup in the e-commerce space. The application connects our ERP system with multiple fulfillment partners. It handles various data flows (product data, stock levels, orders, and shipping information). Communication happens either within an ERP domain or a Fulfiller domain, with different integrations implemented there. All import/export operations run as separate jobs.

Currently, we’re still using the database queue driver, but planning to switch to Redis soon. The jobs are dispatched at scheduled intervals via Laravel’s scheduler. Each job is unique and based on an identifier generated from the resource it’s importing. We also log all processed jobs into a custom job_logs table, which is filled using events in a service provider.

❗ The Problem:

We’re experiencing race conditions whenever more than one queue worker is active. For example, around 200 orders come in during a single import, but with more than one worker running, only about 6 of them actually get imported. The jobs appear in the jobs table and are even logged in the job_logs table with a completed timestamp — yet in the end, only 6 records make it into the database.

🔍 What I’ve tried:

I’ve done quite a bit of research and discussed the issue in depth with ChatGPT. I’ve also tried several things, including:

• Switching to Redis (on the test server)
• Using ShouldBeUnique and cache-based locking mechanisms
• Using withoutOverlapping()
• Building a custom cache lock per job

Unfortunately, none of these solved the issue.

Side note: At the moment, the orders are read from XML files on an SFTP server and then processed in individual jobs. This will soon be replaced by a webhook/API solution.

💡 Current idea:

I’m considering creating dedicated queues per job type, so that only one worker is responsible for each queue. But this feels like a workaround, since Laravel queues are designed to allow multiple workers on the same queue.
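If you do go that route, the split is done purely with worker flags when starting the processes — a sketch (the queue names here are made up):

```shell
# One long-running worker per dedicated queue, so each job type is
# processed strictly sequentially (queue names are illustrative).
php artisan queue:work redis --queue=orders   # handles only order jobs
php artisan queue:work redis --queue=stock    # handles only stock jobs
```

Jobs would then need to be dispatched onto the matching queue, e.g. with `->onQueue('orders')`.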

I also started looking into Laravel Horizon to better manage these queues.

🧠 My questions:

• Has anyone experienced similar race conditions and has an idea of what might be causing it here?
• What would be the best practice for this type of scenario?
• Is Horizon a worthwhile solution in this case?
• Is separating queues a good idea or more of a band-aid?

Thanks a lot for reading — I’m really looking forward to your insights!

Glukinho

How can we suggest anything without seeing a line of code?

You should have debug logs in your jobs to see what is going on inside and why a job isn't actually completing. Without that deep understanding, random ideas like switching to Redis or implementing ShouldBeUnique are most likely inappropriate.

Proper recipes and solutions can only come after a proper investigation of the problem.

A queue worker doesn't produce race conditions by itself, even on the database driver; only flawed logic does.

DigitalViking

Hi,

First of all, thank you so much for your response!

This is actually my first post here — and also my first time working on a project of this size and complexity.

To be honest, I’m not quite sure where to start in terms of sharing code, since there are a lot of classes involved in this process. Could you help me understand which parts would be the most relevant? Would it be best to start with the Job class itself, or the place where the job gets dispatched?

I’m happy to share whatever information you need.

I’ve already added some logging inside the Job class, and I can confirm that the job is being dispatched — however, the handle() method is never executed.

To get things started, I’ll post the Job class and its parent classes below.

Actual Job Class

First Layer Parent Class

Second Layer Parent Class

Job Dispatching (Runs inside another Job)

public function run(): void
{
    $deliveryNoteFiles = $this->importClient->getDeliveryNoteFiles();

    foreach ($deliveryNoteFiles as $file) {
        dispatch(
            new ProcessDeliveryNoteFile(
                $this->erp,
                $this->fileProcessor,
                $file,
            )
        );
    }
}

Thanks for your support!

Glukinho

job is being dispatched — however, the handle() method is never executed

How did you know that?

Are there any errors/exceptions in storage/logs/laravel.log? Do jobs appear in failed_jobs table?

As a first step to sort things out, I would add some logging to a separate temp file in the handle() method:

// config/logging.php
...
'channels' => [
	// ...
	'temp' => [
		'driver' => 'single',
		'path' => storage_path('logs/temp.log'),
		'level' => env('LOG_LEVEL', 'debug'),
		'replace_placeholders' => true,
	],
]
// ProcessDeliveryNoteFile.php

public function handle(): void
{
	Log::channel('temp')->debug("job [{$this->uniqueId()}] handle started");

	$this->fileProcessor->process($this->file);

	Log::channel('temp')->debug("job [{$this->uniqueId()}] handle finished");
}

And see how many "started" and "finished" lines are there in temp log file.
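To compare the counts quickly, a small shell helper might look like this (just a sketch; the log path is passed as an argument):

```shell
# Count lines containing "handle <phase>" in a log file.
# grep -c prints the count but exits with status 1 when the count
# is 0, hence the "|| true" to keep the function's exit status clean.
count_phase() {
    grep -c "handle $2" "$1" || true
}

# usage:
#   count_phase storage/logs/temp.log started
#   count_phase storage/logs/temp.log finished
```

If "started" appears 200 times but "finished" far fewer, the jobs are dying mid-handle; if "started" itself is missing, they never reach handle() at all.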

DigitalViking

How did you know that? Are there any errors/exceptions in storage/logs/laravel.log? Do jobs appear in failed_jobs table?

I’ve added logging whenever a job is dispatched. The number of dispatched jobs (200) matches the number of orders to be imported, and I can also see exactly 200 entries in our job_logs table.

As a test, I added a log entry as the very first line in the handle() method. When using multiple workers, that log entry only appears around 6 times. However, when using only one worker, I get all 200 log entries — so it seems like everything is working correctly in that case.

From this, I’m assuming that the handle() method is not being executed when multiple workers are running. The fact that it works with a single worker tells me that the job logic itself is valid and executable.

Also:

• There are no errors or exceptions in the log files
• The failed_jobs table is empty — nothing appears there

Glukinho

See here: https://laravel.com/docs/12.x/queues#preventing-job-overlaps

You may wish to tune the tries and maxExceptions properties on your job class accordingly. For example, leaving the tries property at 1, its default, would prevent any overlapping job from being retried later.

Isn't that your situation? I see the WithoutOverlapping middleware is used, and uniqueKey() can return something "not very unique", so your jobs are not executed due to insufficient retry attempts.

Try setting $tries = 100 on the job class and removing ->dontRelease() from the middleware, and see if it makes any difference.
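A minimal sketch of that change, assuming the job already exposes a uniqueId() method (class name taken from the thread; the 30-second release delay and the tries value are arbitrary choices, not recommendations):

```php
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\Middleware\WithoutOverlapping;

class ProcessDeliveryNoteFile implements ShouldQueue
{
    // Generous retry budget so jobs released by the middleware are
    // attempted again instead of being dropped after a single try.
    public $tries = 100;

    public function middleware(): array
    {
        // Without ->dontRelease(), an overlapping job is put back on the
        // queue (here after 30 seconds) rather than deleted outright.
        return [(new WithoutOverlapping($this->uniqueId()))->releaseAfter(30)];
    }
}
```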

LaryAI

Absolutely, this is a classic scenario many face when scaling up job processing, particularly with order imports and multi-worker environments! Let's address each of your core issues and questions with best-practice recommendations:


1. Root Cause: Why Does This Race Condition Happen?

Your description suggests lost updates and concurrent processing are the root problems. Here's why it can happen:

  • Duplicate Work: If two workers fetch/insert/update the same order simultaneously, they might step on each other's toes, resulting in only one record making it through or updates getting lost.
  • Transactional Integrity: If you're not wrapping your DB writes in transactions or if your DB layer has no uniqueness constraint, it's much easier to "lose" writes.
  • Database Queue Driver: The database queue is more prone to these issues compared to Redis, because it doesn't handle locks/visibility as robustly and can suffer from "thundering herd" problems under heavy concurrency.

2. Best Practice Solutions

A. Ensure Database Integrity First

Regardless of job/queue mechanics, at the DB-level:

  • Unique Constraints: Ensure orders table has a unique constraint (e.g., order_number) so duplicate inserts simply fail.
  • Upserts: Use updateOrInsert / insertOrIgnore / onDuplicateKeyUpdate (MySQL) to ensure race conditions at DB level are handled.
  • Transactions: Wrap your order import logic in transactions, especially if there are related models being saved and/or complex business logic.
    DB::transaction(function () use ($orderData) {
        Order::updateOrCreate(
            ['external_id' => $orderData['external_id']],
            $orderData
        );
    });
    

B. Job Uniqueness: Laravel Solutions

  • ShouldBeUnique: If two jobs for the same resource run at the same time, this trait should prevent concurrent execution. But, it requires the cache (Redis recommended). If you had issues with this, ensure your key generation for the unique job is robust and that the cache is working effectively.

C. Queue/Worker Organization

  • Don't separate queues per resource just to "bandage" concurrency issues. While per-domain or per-task queues can help with scaling, they're not meant as a workaround for race conditions.

  • Start with a single, robust queue and multiple workers, only separate if specific jobs are truly heavy and should not block the rest.

D. Redis Queue + Laravel Horizon

  • YES, use Redis and Horizon!
    • Redis is highly performant and provides better concurrency handling, especially with visibility timeouts and atomic left-pop operations.
    • Horizon gives great insight and control over your queues and lets you manage concurrent worker balancing easily.
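For reference, a Horizon supervisor entry might look like this (an illustrative fragment; the supervisor name, queue names, process count, and tries are all assumptions to be tuned per application):

```php
// config/horizon.php — illustrative supervisor configuration
'environments' => [
    'production' => [
        'supervisor-imports' => [
            'connection'   => 'redis',
            'queue'        => ['default', 'imports'],
            'balance'      => 'auto', // Horizon shifts workers between queues
            'maxProcesses' => 5,
            'tries'        => 3,
        ],
    ],
],
```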

E. File Processing Caveat

  • If you read from files (e.g., XML over SFTP), ensure your application isn't picking the same file multiple times due to concurrency! Use a "pick/lock/rename/remove" strategy atomically to mark files as "in process".
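The "claim by rename" idea can be sketched in plain PHP. This only illustrates a local filesystem, where rename() is atomic on POSIX systems; over SFTP you would use your SFTP client's rename operation instead. The function name is made up:

```php
<?php
// Atomically "claim" a file before processing it. When two workers race,
// only one rename() succeeds; the loser gets false and skips the file.
function claimFile(string $path): ?string
{
    $claimed = $path . '.processing';

    // @ silences the warning emitted when another worker won the race
    // and the source file no longer exists.
    if (@rename($path, $claimed)) {
        return $claimed; // we own the file: process it, then archive/delete
    }

    return null; // already claimed by another worker
}
```

The same pattern works with a cache lock (Cache::lock()) keyed on the file name, but the rename variant also survives a crashed worker being inspected later via the leftover `.processing` file.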

3. Answers to Your Questions

  • Similar Race Issues?
    Yes—most happen due to missing DB constraints, lack of transactional integrity, or poor cache/lock configuration. Validate all layers.

  • Best Practice?

    • Use robust unique keys in DB (unique constraint).
    • Use upserts or other atomic insertions.
    • Use Redis as queue backend.
    • Use ShouldBeUnique/withoutOverlapping traits with Redis.
    • Use queue workers wisely, not as a workaround for race conditions.
    • Use Laravel Horizon for monitoring and balancing.
  • Is Horizon Worthwhile?
    Yes. It's the best tool for managing, scaling, and monitoring queues in Laravel.

  • Separate Queues: Good Idea?
    Only if there's a performance/scaling need per job type, not to solve concurrency issues.


4. Diagnostic Checklist

Double-check the following:

  1. Orders table has unique constraint.
  2. updateOrInsert (or equivalent) is used when saving orders.
  3. Your jobs’ unique IDs truly reflect the entity being processed.
  4. Laravel queue config is using Redis in production.
  5. If using ShouldBeUnique, cache is properly set to Redis.
  6. SFTP/XML files are not being picked up more than once.

5. Example Job With Upsert and Lock

use App\Models\Order; // adjust to your model's namespace
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\DB;

class ImportOrderJob implements ShouldQueue, ShouldBeUnique
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $orderData;

    public function __construct(array $orderData)
    {
        $this->orderData = $orderData;
    }

    public function uniqueId()
    {
        return $this->orderData['external_id']; // Or unique order number
    }

    public function handle()
    {
        DB::transaction(function () {
            Order::updateOrCreate(
                ['external_id' => $this->orderData['external_id']],
                $this->orderData
            );
        });
    }
}

6. Summary

Race conditions are best solved at the DB and job uniqueness level, not just by queue organization! Use Redis+Horizon, upsert operations, unique DB constraints, and one-job-per-resource logic. Horizon will help you optimize and monitor. Separate queues are for workflow isolation and scale—not for concurrency bugs.

If after all this, the issue persists, it's usually a bug in file picking or in the uniqueness of the order resource identifier.


Let me know if you need tailored code or configuration examples for any of these steps!
