
openparl's avatar

Remote API has a limit on total jobs that it is processing. How to handle?

Hello

I have a third-party API which is asynchronous and posts job results back to me with a callback. Unfortunately, it refuses to take new jobs if it is currently processing more than 50 requests from me.

How would you handle this? It seems my options are...

  • Send all jobs through a single Horizon queue with a 50-process limit, and make each job not merely dispatch the API call but also poll in a while() loop until it detects the expected response in the database. (This feels very inelegant. Responses can take from a few seconds to many minutes depending on the remote platform's load.)
  • Do what I'm doing now and fruitlessly search for a package which has seen this problem before
  • Fail to find one and post here in hope of expertise...

Feels like this is not an exceptionally weird way for a consumable API to behave. Do you know of something that I'm missing?

Thanks

0 likes
5 replies
Glukinho's avatar

If you can request that API for available "slots", your solution might be:

  1. send requests to that API using queued jobs (one job per request)
  2. in the job, ask the API for available slots; if there are any, actually send your request. If no slots are available, release the job back onto the queue for some time (5 minutes, for example).
// Jobs/RequestForeignApiJob.php

if ($foreign_api->availableSlots() === 0) {
    // Setting delay() on an already-running job doesn't reschedule it;
    // release() puts it back on the queue to retry after the given delay.
    $this->release(300); // adjust for your situation
    return;
}

$foreign_api->sendRequest($data);

Yes, each call to the API will cost you an additional request, but this is a reliable way.

You can put the slot check into job middleware, if you wish.
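For instance, the middleware variant could be sketched like this. The class name `EnsureApiSlotAvailable` and the API client object with its `availableSlots()` method are hypothetical placeholders, not a real package; substitute your own wrapper around the third-party API.

```php
<?php
// Hypothetical job middleware: only lets the job run when the remote
// API has a free slot; otherwise releases the job back onto the queue.

class EnsureApiSlotAvailable
{
    // $api is assumed to be your own client wrapper around the third-party API.
    public function __construct(private object $api)
    {
    }

    // Laravel job-middleware signature: receives the job and a $next closure.
    public function handle(object $job, Closure $next): void
    {
        if ($this->api->availableSlots() === 0) {
            // No free slot: put the job back, retry in 5 minutes.
            $job->release(300);
            return;
        }

        $next($job); // A slot is free: let the job's handle() run.
    }
}
```

You would attach it from the job's `middleware()` method, e.g. `return [new EnsureApiSlotAvailable($foreignApi)];`.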

1 like
openparl's avatar

@Glukinho Sadly the API doesn't have an endpoint that reports my pending-job count; I have to keep track of it myself, or it returns a hard error synchronously.

But you did set me in direction of what I think is a decent solution below, so thank you :)

Glukinho's avatar

Maybe simpler: adjust your job's $tries, $backoff and $this->retryUntil(), and the job will retry itself after some time when your API gives you an error (as long as an exception is thrown in that case).
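A minimal sketch of such a tuned job, using Laravel's `$tries`, `$backoff` and `retryUntil()` names (the concrete numbers are placeholders; in a real app the class would also implement `Illuminate\Contracts\Queue\ShouldQueue` and live on a queue):

```php
<?php
// Sketch of a queued job whose retry behaviour is tuned so that
// "no free slot" errors simply cause a delayed retry.

class SendApiRequestJob
{
    public int $tries = 30;   // max attempts before the job is marked failed
    public int $backoff = 60; // seconds to wait between attempts

    public function retryUntil(): DateTimeInterface
    {
        // Give up entirely after two hours, whichever limit hits first.
        return new DateTimeImmutable('+2 hours');
    }

    public function handle(): void
    {
        // Send the request here and let it throw on failure, so the
        // queue worker releases and retries using the settings above.
    }
}
```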

openparl's avatar

I am leaning towards the following:

  • My outgoing tasks have a status enum (new,in-progress,complete,failed) and
  • The API response callback will change a task's status from in-progress to complete or failed, so
  • Let's add "REMOTE_API_CONCURRENCY_LIMIT = 50" to my environment and then
  • Dispatch new outgoing jobs only through a scheduled task running every N seconds
  • That task:
    1. Counts how many jobs have been sent but are not yet complete or failed (e.g. as $inprogress)
    2. Picks the oldest (50 - $inprogress) tasks and fires new jobs for those, and
    3. If $inprogress = 50, exits without action
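In plain PHP, the tick logic could be sketched like this. The function name `pickBatch()` and the array task shape are mine for illustration; the statuses and the 50-job limit come from the plan above. In Laravel this would be a query, not array filtering.

```php
<?php
// Hypothetical helper for the scheduled task: given all tasks and the
// concurrency limit, return the oldest "new" tasks that may be dispatched.

function pickBatch(array $tasks, int $limit): array
{
    // 1. Count jobs sent but not yet complete or failed.
    $inProgress = count(array_filter(
        $tasks,
        fn (array $t) => $t['status'] === 'in-progress'
    ));

    // 3. Already at the limit: exit without action.
    if ($inProgress >= $limit) {
        return [];
    }

    // 2. Oldest pending tasks first, up to the free capacity.
    $pending = array_values(array_filter(
        $tasks,
        fn (array $t) => $t['status'] === 'new'
    ));
    usort($pending, fn (array $a, array $b) => $a['created_at'] <=> $b['created_at']);

    return array_slice($pending, 0, $limit - $inProgress);
}
```

The scheduled command would dispatch one queued job per picked task and flip each task to in-progress in the same transaction, so the next tick doesn't pick it again.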

This seems a lot more elegant than long-running jobs or mucking about with backing-off periods to me.

If there's something very dumb about that, someone please let me know :)

Glukinho's avatar

@openparl Don't overcomplicate. And don't calculate other side's limits, this is their responsibility, not yours.

Just make your request throw an exception on failure and put this logic into a queued job (with tuned delay/backoff settings); Laravel will retry failed jobs for you. That's all. See here: https://laravel.com/docs/12.x/queues#dealing-with-failed-jobs

If you access the API with Http facade, make sure it throws exceptions on failures:

$response = Http::get('https://your-api.com', $data)->throw();

As a bonus, your requests will be retried on any error, not only limit-related ones (API is down, no network connection, etc.).

Some things to consider:

  1. Always tune $backoff for your job; don't leave it at the default, or Laravel will retry immediately many times and you might get banned by the API altogether. I'd say set at least 1 minute.
  2. If your queue driver is database, a job can't be retried more than 255 times, as the 'attempts' column has TINYINT type. Either tune the job's $tries, $backoff and ->retryUntil() so it never reaches this limit, or change the column type to INT. Otherwise Laravel goes crazy and your queues break.
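If you go the column-widening route from item 2, a migration fragment could look like this (a sketch only; check your actual jobs table schema, and note that on older Laravel versions ->change() requires the doctrine/dbal package):

```php
// database/migrations/xxxx_widen_jobs_attempts_column.php (fragment)
Schema::table('jobs', function (Blueprint $table) {
    // The default jobs table uses an unsigned TINYINT here (max 255).
    $table->unsignedInteger('attempts')->change();
});
```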
