
chrismay:

Separate queue for each job

I was wondering if there are any drawbacks to dispatching each job class to its own queue.

e.g. App\Jobs\SendWelcomeEmail is dispatched to a queue named App\Jobs\SendWelcomeEmail.

I plan to do so since it allows me to manage the queues more easily:

e.g. I could see how many pending entries there are for the App\Jobs\SendWelcomeEmail job by calling: Queue::size('App\Jobs\SendWelcomeEmail');

Another use case: I have a scheduled job that runs every minute, but I only ever want a single instance of it in the queue. This is easy to achieve with a separate queue for that job.
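The per-class dispatch itself would be straightforward; here is a minimal sketch, assuming the SendWelcomeEmail job from this thread and Laravel's standard onQueue() / Queue::size() helpers:

```php
// Dispatch each job to a queue named after its own class.
SendWelcomeEmail::dispatch($user)->onQueue(SendWelcomeEmail::class);

// Inspect the backlog for just that job type.
$pending = Queue::size(SendWelcomeEmail::class);
```

Because the queue name is the class name, every metric Horizon or Queue::size() reports maps one-to-one to a job type.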

I use Horizon to manage the queues, and the setup looks like this:

class Queues {
    const DEFAULT = 'default';
    const SCHEDULER = 'scheduler';
    const PROCESS_RESULTS = ProcessResults::class;
    const MOVE_PROCESSED_ROWS = MoveProcessedRows::class;
    const SEND_WELCOME_EMAIL = SendWelcomeEmail::class;
    ...

    static function all() {
        $oClass = new ReflectionClass(get_called_class());
        return $oClass->getConstants();
    }
}

horizon.php

...
'queue' => \App\Enums\Queues::all(),
...

I'd appreciate any feedback or criticism of this setup.

bobbybouwmann:

To be honest, having a separate queue per job is really overkill. In general you can group your jobs into categories and create a queue for each. For example: low, medium, high, or something like email, backend, scheduled.

As for having one job running on a queue and keeping it manageable, you have other options as well. The first option is putting it in the scheduler. That way you can make it run every minute without needing a queue at all, and you can use the prevent-overlapping feature: https://laravel.com/docs/5.8/scheduling#preventing-task-overlaps
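A minimal sketch of that scheduler approach, using the withoutOverlapping feature from the linked docs (the command name here is hypothetical):

```php
// app/Console/Kernel.php
protected function schedule(Schedule $schedule)
{
    // Runs every minute, but never starts a second copy while one
    // is still running -- at most one instance exists, no queue needed.
    $schedule->command('results:process')
             ->everyMinute()
             ->withoutOverlapping();
}
```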

Try to keep it simple ;)

pilat:

I also have a similar need: a separate worker for each "unique key" that I define on a job. For example, an external API imposes an RPS limit (say, 7 requests per second) on its consumers, and I don't want to exceed it. One approach would be to send those requests one by one. That ensures the requests are not thrown away (they just wait their turn) and, the important part, that they are processed in first-in-first-out order.

Now, suppose I have several tenants, each with their own API endpoint, and each endpoint's RPS is counted separately. If I process all such requests in a single worker for the whole app, the queue grows unnecessarily. I cannot use multiple workers plus WithoutOverlapping either, as that breaks the FIFO design (and it actually fails the "excess" jobs, which then have to be handled separately). The ideal solution would be a queue manager that dynamically creates a separate single worker for each tenant. The job class could have code like this in its constructor: $this->singleWorkerKey = $message->tenantId;

pilat:

An example: for sending a Telegram message, the $singleWorkerKey could be a combination of bot ID + channel ID, since Telegram enforces an RPS of 1 per channel. The handle() method could then call sleep(1) at the end of each request to make sure the limit is never reached (time of request + 1 second is definitely > 1 second).
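That "time of request + 1 second" spacing can be sketched in plain PHP (the helper name is mine, not part of any Telegram library):

```php
// Enforce at most one request per second: whatever the request itself
// took, sleeping a full second afterwards makes the total gap between
// two consecutive sends strictly greater than 1 second.
function sendRateLimited(callable $sendMessage): void
{
    $sendMessage(); // the actual API call
    sleep(1);       // request time + 1s => always more than 1s per send
}
```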

krisi_gjika:

@pilat why not throttle your API requests via cache locks? Say the job attempts to get a lock before making the API call; if it can't within a reasonable time, release the job back to the queue with a calculated wait time. When the job has been attempted a number of times without getting the lock, it fails like a normal job. The job that did get the lock and sent the API call can sleep for the remaining time needed so as not to trigger the 429.

Of course, with multiple tenants your cache store needs to be central if you want to throttle across all of them, or you can prefix the lock with some tenant ID.
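A sketch of that lock-then-release pattern inside a job's handle() method, assuming Laravel's Cache::lock() atomic locks and a tenantId property on the job (the lock key and callExternalApi() helper are illustrative):

```php
public function handle()
{
    // One lock per tenant, so only this tenant's requests are serialized.
    $lock = Cache::lock('api-lock:' . $this->tenantId, 10);

    if (! $lock->get()) {
        // Couldn't get the lock: put the job back with a delay.
        return $this->release(5);
    }

    try {
        $this->callExternalApi(); // hypothetical API call
        sleep(1);                 // hold the slot long enough to avoid a 429
    } finally {
        $lock->release();
    }
}
```

Note this still re-queues on contention, which is the trade-off against strict FIFO that the thread goes on to discuss.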

pilat:

@krisi_gjika that's what I can do, but I'd rather have all outgoing requests wait their turn instead of being rescheduled back onto the queue. Say a script wants to update a resource in the external API and then immediately fetch the fresh version of that resource to sync with my app's database. If I don't guarantee first-in-first-out ordering, the second request (the one that reads the supposedly updated resource) might arrive before the "update" one, and that is the problem.
