
amitRoy1999

Config for max concurrency for a single job

I am using Laravel 11 with Horizon and a Redis queue. I have a job that runs daily — I dispatch 500 instances of this job for 500 numbers. The job calls an external API and updates my database; it takes 10–20 minutes depending on the API response. I want to know if there’s a simple configuration available where I can define, by job name, that only 100 concurrent instances of a given job class can run at a time. It would be great if there’s a solution at the supervisor level where a job won’t even start if the maximum concurrency for that job is reached, and when one finishes, the next job in the queue starts. I also don’t want to add another supervisor configuration for this.

Glukinho

If you want 100 jobs to run simultaneously you need to set up 100 workers. Each worker runs jobs one after another. Something tells me this will not satisfy you.

Maybe you can gather numbers in some temporary table and push them by batches of 100 entries in one job (or better scheduled task) using HTTP client concurrency: https://laravel.com/docs/12.x/http-client#concurrent-requests

Scheduled task can run every minute and check how many numbers are there in temp table. If there are more than 100 then take first 100 numbers and push to external API. If less - then exit.
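That scheduled-task idea can be sketched roughly like this, assuming a hypothetical `pending_numbers` table and endpoint URL (both placeholders), and using the HTTP client's real `Http::pool` concurrency API:

```php
<?php

use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Http;

// Hypothetical scheduled-command body: drain up to 100 numbers per run.
$numbers = DB::table('pending_numbers')->orderBy('id')->limit(100)->get();

if ($numbers->count() < 100) {
    return; // fewer than 100 queued; wait for the next minute
}

// Fire all 100 requests concurrently, keyed by row id.
$responses = Http::pool(fn ($pool) => $numbers->map(
    fn ($row) => $pool->as((string) $row->id)
        ->post('https://api.example.com/process', ['number' => $row->number])
)->all());

// Remove the numbers whose requests succeeded; failures stay for the next run.
// (On connection failure the pool returns an exception object, not a Response.)
foreach ($numbers as $row) {
    $response = $responses[(string) $row->id] ?? null;

    if ($response instanceof \Illuminate\Http\Client\Response && $response->successful()) {
        DB::table('pending_numbers')->where('id', $row->id)->delete();
    }
}
```

Table name, endpoint, and payload shape are assumptions; the pool/`as()` pattern is from the linked HTTP client docs.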

Another approach is not to wait for API response in a job - just push a number to API, get task id (API must provide it for long-running tasks so you can request task result later) and exit. This way your hundreds of jobs will be processed quickly. If you need to fetch some result from API using task id, you can do it in subsequent chained job. I use this approach with API that takes video files and returns transcripts after some significant time - works fine.

Task id returned by API is stored by "send" job and retrieved by "check result" job using context: https://laravel.com/docs/12.x/context

"Check result" job releases back to queue using $this->release(60) when result is not ready yet; when the result is ready it exits normally. This way a worker is never stuck in useless waiting for API response.

rodrigo.pedra

You can use the WithoutOverlapping job middleware with a sequential key as the overlapping ID.

Something like this:

use Illuminate\Queue\Middleware\WithoutOverlapping;

public function middleware(): array
{
    return [new WithoutOverlapping($this->number % 100)];
}

Reference: https://laravel.com/docs/12.x/queues#preventing-job-overlaps

The overlapping ID needs to be sequential for this to work as expected. Otherwise you can end up with fewer jobs being processed concurrently.

If your 500 numbers are not sequential, you can assign an index when dispatching your job:

foreach ($records as $index => $record) {
    MyJob::dispatch($record, $index + 1);
}

You'd accept that index in the job's constructor and then use it as the overlapping key.
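Putting the pieces together, the job side could look like this (class and property names are placeholders; `WithoutOverlapping` and the `middleware()` hook are the real Laravel API):

```php
<?php

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Queue\Queueable;
use Illuminate\Queue\Middleware\WithoutOverlapping;

class MyJob implements ShouldQueue
{
    use Queueable;

    public function __construct(
        public mixed $record,
        public int $index, // sequential 1..500, assigned at dispatch time
    ) {}

    public function middleware(): array
    {
        // 100 distinct lock keys => at most 100 instances hold a lock at once
        return [new WithoutOverlapping($this->index % 100)];
    }

    public function handle(): void
    {
        // ... call the external API and update the database
    }
}
```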

You still need 100 workers to process 100 jobs concurrently.

amitRoy1999

Thank you all for your suggestions. I also looked in other places and implemented your solutions, but the main issue with these approaches is that the job actually gets initiated every time — the worker starts the job, then goes to sleep or something else happens, and the job fails because of multiple retries. So the conclusion I came to is to add a new supervisor with a low max worker count, because I have many jobs in my app and modifying every job and handling all the edge cases would take a lot of time.
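For reference, a dedicated Horizon supervisor for these jobs could be sketched like this in config/horizon.php (supervisor names, queue name, and numbers are assumptions; `connection`, `queue`, `maxProcesses`, and `timeout` are real Horizon supervisor options):

```php
<?php

// config/horizon.php (fragment)
'environments' => [
    'production' => [
        'supervisor-default' => [
            'connection' => 'redis',
            'queue' => ['default'],
            'maxProcesses' => 10,
        ],
        'supervisor-api-sync' => [
            'connection' => 'redis',
            'queue' => ['api-sync'],  // dispatch the daily jobs to this queue
            'maxProcesses' => 100,    // at most 100 of them run concurrently
            'timeout' => 3600,        // allow the 10-20 minute jobs to finish
        ],
    ],
],
```

The daily jobs would then be sent to that queue with something like `MyJob::dispatch($record)->onQueue('api-sync')`.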

Glukinho

You're doing something wrong. Normally a worker doesn't "go to sleep" in the middle of a job.

My guess is your jobs take significant time while the worker's default timeout is 60 seconds, so the jobs get killed by the timeout. You should start your worker with an increased timeout:

php artisan queue:work --timeout=3600
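Since the original question uses Horizon, which starts the workers itself, the equivalent setting lives in the supervisor options of config/horizon.php rather than on a `queue:work` command. A fragment (supervisor name assumed):

```php
<?php

// config/horizon.php (fragment): per-supervisor equivalent of --timeout
'supervisor-1' => [
    'connection' => 'redis',
    'queue' => ['default'],
    'timeout' => 3600, // must exceed the job's actual runtime
],
```

Note also that Laravel retries a job the queue believes has stalled: the `retry_after` value on the Redis connection in config/queue.php (90 seconds by default) should be set larger than the timeout, otherwise long jobs are re-dispatched while still running, which matches the "multiple retries" symptom described above.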
amitRoy1999

Yes, I can increase the timeout, but then that particular worker will be tied up alongside the job, right?

Glukinho

I don't understand what you mean. If you start a worker with --timeout=3600, then any job handled by this worker will be forcefully interrupted after 3600 seconds (1 hour) if it hasn't finished by then. The timeout is just the maximum execution time of a job.

I seriously doubt you have jobs running for 1 hour (you mentioned 10-20 minutes) so this value should satisfy you.

If you have very long jobs that may need 2 hours to finish - then set worker timeout to at least 3 hours, and so on.

If you mean a worker won't take new jobs while the previous one isn't done — yes, a worker executes one job at a time, so it doesn't pick up a new job while the current one is running.

If you need to process multiple jobs simultaneously, you need multiple workers running.

See here: https://laravel.com/docs/12.x/queues#supervisor-configuration

The right terminology is:

A worker WORKS when it is executing a job at the moment.

A worker SLEEPS when no job is being executed at the moment:

  • either all jobs are done and the queue is empty,
  • or all jobs are scheduled (delayed) to run in the future.

When a new job is ready to be processed, a worker stops sleeping and starts working, running that job.
