
wsamoht's avatar

Collecting jobs to queue into one job to reduce SQS latency

I have a Laravel app using AWS SQS for queued jobs. We are running on Laravel Vapor, but that doesn't really matter here except that Vapor doesn't support deferred work (work performed after the response is sent). The latency to add a job to SQS can range from 50-150ms. If one web request queues up a lot of jobs (notifications, syncing data to third-party services, etc.), the latency can add up and slow down the response pretty quickly.

I came up with the idea of using a "Queue Bucket" that collects all jobs dispatched during a request as closures and, right before the response is sent, creates one job that handles dispatching all the others. I am using middleware attached to all web requests to call QueueBucket->run() so I don't forget to call it somewhere.
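The middleware could be sketched like this (class and property names are hypothetical; the bucket is typed loosely so the sketch stands on its own, but in the real app it would be the scoped QueueBucket instance and Laravel's request/response objects):

```php
// Hypothetical sketch of the flush middleware. run() is called after
// $next($request), so every job queued while handling the request ends
// up in the same bucket and goes out in a single dispatch.
final class FlushQueueBucket
{
    public function __construct(private object $bucket) {}

    public function handle($request, \Closure $next)
    {
        $response = $next($request); // controllers, listeners, etc. run first

        $this->bucket->run(); // one dispatch for everything collected

        return $response;
    }
}
```

Because the flush happens after `$next($request)`, it runs once the controller has finished queuing work.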

Any thoughts on this? Is there a better way?

One caveat is that it doubles the model hydration queries, since models are re-hydrated when QueueBucketJob runs and again within each job it queues up. But I'll take that over slowing down a request by a second or more.

Here is the code. The idea of using SerializableClosure came from Aaron Francis' recent YouTube video.

QueueBucket.php
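Roughly, QueueBucket looks like this (a sketch: the dispatcher is injected here as a plain callable so the class stays framework-free, whereas the real version dispatches a QueueBucketJob containing the collected SerializableClosures):

```php
// Sketch of the bucket: collect closures during the request, then hand
// the whole batch to one dispatcher call from run().
final class QueueBucket
{
    /** @var list<\Closure> */
    private array $jobs = [];

    public function __construct(
        // In the real app, something like:
        // fn (array $jobs) => dispatch(new QueueBucketJob(collect($jobs)))
        private \Closure $dispatcher
    ) {}

    public function add(\Closure $job): void
    {
        $this->jobs[] = $job;
    }

    /** Dispatch one bucket job for everything collected, then reset. */
    public function run(): bool
    {
        if ($this->jobs === []) {
            return false;
        }

        ($this->dispatcher)($this->jobs);
        $this->jobs = [];

        return true;
    }
}
```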

QueueBucketJob.php

<?php

namespace App\Domain\Shared\Jobs;

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Queue\Queueable;
use Illuminate\Support\Collection;
use Laravel\SerializableClosure\SerializableClosure;

class QueueBucketJob implements ShouldQueue
{
    use Queueable;

    /**
     * @param Collection<int, SerializableClosure> $jobs
     */
    public function __construct(
        private Collection $jobs
    ) {}

    public function handle(): void
    {
        $this->jobs->each(fn (SerializableClosure $job) => $job->getClosure()());
    }
}

Examples of adding jobs/queueable work to the bucket

I am actually using a Facade for QueueBucket, which is why the methods are called statically. QueueBucket is also registered in the service container as a scoped instance (a single instance per request).
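The wiring is roughly this (a sketch of the container binding and facade; the namespace is an assumption, and this is framework configuration rather than standalone code):

```php
// In a service provider's register() method: one QueueBucket per request.
$this->app->scoped(\App\Domain\Shared\Jobs\QueueBucket::class);

// The facade just points at the scoped binding.
class QueueBucket extends \Illuminate\Support\Facades\Facade
{
    protected static function getFacadeAccessor(): string
    {
        return \App\Domain\Shared\Jobs\QueueBucket::class;
    }
}
```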

QueueBucket::add(fn () => SendNewUserAdminNotification::send($user));

QueueBucket::add(fn () => App::make(UpdateEmailListSubscriberAction::class)->onQueue()->execute($user));

QueueBucket::add(fn () => $user->notify(
    new AccountEmailVerificationNotification(
        $emailRecord->new_email,
        $emailRecord->verification_code
    )
));
LaryAI's avatar

Your approach is solid and is a common pattern to reduce queue latency in high-throughput scenarios. By batching the job dispatch into a single SQS call, you’re minimizing the network round-trips and the overhead of multiple SQS requests per web request. This is especially helpful when you’re on Laravel Vapor, where you don’t have access to after-response hooks.

A few thoughts and suggestions:

1. Your Approach is Sound

  • Batching jobs into a single "bucket" job is a good way to reduce SQS latency.
  • Using SerializableClosure is a smart way to capture the work to be done, especially when the jobs are not always serializable themselves.

2. Potential Improvements

a. Use Real Jobs, Not Closures (If Possible)

While closures are flexible, they can make debugging, retrying, and monitoring harder. If you can, consider collecting actual job instances (that implement ShouldQueue) instead of closures. This way, you retain all the benefits of Laravel's job system (like retries, tags, etc.).

Example:

// Instead of:
QueueBucket::add(fn () => SendNewUserAdminNotification::send($user));

// Do:
QueueBucket::add(new SendNewUserAdminNotification($user));

Then, your QueueBucketJob would simply dispatch each job:

public function handle(): void
{
    $this->jobs->each(fn ($job) => dispatch($job));
}

b. Serialize Models Carefully

You mentioned double hydration. If you’re passing Eloquent models to jobs, they will be serialized and re-hydrated. To minimize this, always pass only the model IDs to jobs, and re-query in the job’s handle() method.
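For example (a sketch: the job name is made up, and the finder/sender are injected as callables so the snippet runs without Eloquent; in Laravel you would call something like `User::findOrFail($this->userId)` inside `handle()`):

```php
// Sketch: the job stores only the key, so the serialized payload is tiny
// and the model is hydrated exactly once, when handle() runs.
final class SyncUserJob
{
    public function __construct(private int $userId) {}

    public function handle(\Closure $findUser, \Closure $send): void
    {
        // Hydrate at execution time instead of serializing the model.
        $send($findUser($this->userId));
    }
}
```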

c. Consider Chunking for Large Buckets

If you ever have hundreds of jobs in a single request, you might want to chunk them into multiple bucket jobs to avoid hitting SQS or Lambda payload limits.

$chunks = $this->jobs->chunk(50);
foreach ($chunks as $chunk) {
    dispatch(new QueueBucketJob($chunk));
}

d. Middleware Placement

You’re already using middleware to call QueueBucket->run(). Make sure it’s at the end of the middleware stack so it runs after all work is done.

3. Example Refactor

Here’s a version using job instances:

QueueBucket.php

namespace App\Domain\Shared\Jobs;

use Illuminate\Support\Collection;

class QueueBucket
{
    private Collection $jobs;

    public function __construct()
    {
        $this->jobs = new Collection();
    }

    public function add($job): void
    {
        $this->jobs->push($job);
    }

    public function run(): bool
    {
        if ($this->jobs->isNotEmpty()) {
            dispatch(new QueueBucketJob($this->jobs));
            $this->jobs = new Collection();
            return true;
        }
        return false;
    }
}

QueueBucketJob.php

namespace App\Domain\Shared\Jobs;

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Collection;

class QueueBucketJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, SerializesModels;

    public function __construct(
        private Collection $jobs
    ) {}

    public function handle(): void
    {
        $this->jobs->each(fn ($job) => dispatch($job));
    }
}

Usage:

QueueBucket::add(new SendNewUserAdminNotification($user));
QueueBucket::add(new UpdateEmailListSubscriberAction($user));
QueueBucket::add(new AccountEmailVerificationNotification($emailRecord->new_email, $emailRecord->verification_code));

4. Tradeoffs

  • Pros: Drastically reduces SQS latency per request.
  • Cons: Slightly more complex, and jobs are not immediately visible in the queue (they’re nested in a bucket job).

5. Conclusion

Your solution is good and is a proven pattern. If you can, prefer passing job instances over closures for better integration with Laravel’s queue system. Otherwise, your current approach is perfectly valid for your use case.

Let me know if you want a more detailed implementation or have any follow-up questions!

wsamoht's avatar

@LaryAI Using real jobs instead of closures works well for standard/direct job classes. What about for Laravel Notifications where they are indirectly dispatched using $user->notify()?

wsamoht's avatar

@Glukinho That's not supported on Laravel Vapor, though, last I knew. Bref supposedly does support it.

wsamoht's avatar

@Glukinho Nope :) Nothing is supported on Vapor that runs after the response is sent/returned. This was originally a limitation of Lambda. But now that Lambda supports streams, it should be possible but hasn't been added to Vapor yet.

EDIT: Streams might not be the answer after reading the AWS post again, but doing work after the response returns seems possible now especially since bref somehow supports it.

EDIT EDIT: Just found this AWS blog saying streaming could be used for this.

Glukinho's avatar

@wsamoht what about concurrency? https://laravel.com/docs/12.x/concurrency

You could at least do heavy tasks in parallel instead of one by one.

Your queue bucket seems ok, but, honestly, I would think about changing platform if my current doesn't allow such useful features. This is more promising than fighting with the platform...

wsamoht's avatar

@Glukinho I don't think so, since it's mostly for the CLI, and Concurrency::defer runs after the response is sent, which isn't supported.

I agree about the platform but I am hopeful the Laravel team adds support even though they are focusing on Laravel Cloud. AWS is pretty generous with their free tier on Lambda so our costs are pennies per month right now ($39/month obviously for Laravel Vapor). We would probably save a little bit using Laravel Cloud at first but might get more expensive quicker as we grow. I need to look at the numbers again. Or we should switch to bref.

Glukinho's avatar

@wsamoht Is $39/month "pennies"? I don't know your load and requirements, but a VPS would be cheaper, and you wouldn't be limited by strange Vapor constraints, having full Laravel power instead.

150ms for each queue call is ridiculous; it should be near-instant...

wsamoht's avatar

@Glukinho Our direct AWS Lambda costs are pennies. The Vapor cost obviously is not. A VPS might be a little cheaper (we'd need a minimum of four servers, two web and two workers, for HA). I use Forge for another service/project (been a Forge customer since 2015), but for the project in question I don't want to have to worry about HA and server updates. That's the trade-off. And Forge doesn't yet support IPv6, so we get the additional charge of around $4/month/server on AWS just so we can use Forge. Believe me, I try to run as cheaply as possible, with the infra management effort taken into consideration.

If we were to change infra, we would go to Laravel Cloud. The best of both worlds - fully supported Laravel features and no server management among other things.

The 50-150ms queue latency is not Vapor specific. It's AWS SQS specific. I've tested the latency myself and AWS says themselves:

Typical latencies for SendMessage, ReceiveMessage, and DeleteMessage API requests are in the tens or low hundreds of milliseconds. (source)

I appreciate your insight and help. I am not looking to argue infra. You are completely right, we should just not use Vapor and SQS...LOL. But, if we are, is there a better solution than what I've come up with?

Glukinho's avatar

@wsamoht Another option comes to mind: if the queue is too slow and defer (or anything executed after the response) isn't available, then serialize your heavy jobs to the cache and set up a per-minute scheduled task that unserializes the jobs from the cache and queues them.
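A hedged sketch of that idea (the store here is a plain ArrayObject standing in for the cache; in Laravel it would be Cache::get()/Cache::put(), and a real implementation would need to think about atomicity if a request pushes while the scheduled task drains):

```php
// Sketch: requests append serialized jobs to a buffer keyed in the cache;
// a per-minute scheduled task drains the buffer and dispatches everything.
final class DeferredJobBuffer
{
    public function __construct(
        private \ArrayObject $store,          // stand-in for the cache
        private string $key = 'deferred-jobs'
    ) {}

    public function push(string $serializedJob): void
    {
        $pending = $this->store[$this->key] ?? [];
        $pending[] = $serializedJob;
        $this->store[$this->key] = $pending;  // Cache::put(...) in Laravel
    }

    /** Called by the scheduled task: drain the buffer, dispatch each job. */
    public function drain(\Closure $dispatch): int
    {
        $pending = $this->store[$this->key] ?? [];
        $this->store[$this->key] = [];        // not atomic; see note above

        foreach ($pending as $job) {
            $dispatch($job);
        }

        return count($pending);
    }
}
```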

wsamoht's avatar

@Glukinho That's not a bad idea! Vapor even supports sub-minute scheduled tasks. Right now we are using DynamoDB for our cache driver and we are getting around 2ms response times.

Just to clarify, the heaviness of the job doesn't matter.

Just running the following with SQS has the same time:

dispatch(fn () => 0);

I should clarify too that I shouldn't be saying "latency". I am actually measuring the time it takes to dispatch the job. Most of the time is probably related to latency though.

Benchmark::measure(fn () => dispatch(fn () => 0));

Testing with Redis (actually Valkey) as the queue driver locally, I am seeing around 10ms. I would have thought it would be faster than that. Either way, way faster than SQS.

EDIT: Testing with Redis doing a Benchmark with 1,000 iterations, the time is 2.8ms.

Sadly, it looks like Vapor only supports the SQS driver. Sigh....

I am also not in love with wrapping all work being queued with a closure. Maybe I could override/intercept at the framework level so it's an easier swap.

wsamoht's avatar

Interestingly, SQS timing is being discussed today in Jack Ellis' tweet here...LOL. He uses Vapor for Fathom Analytics.

https://x.com/JackEllis/status/1945891353793028539

A while back, I stopped queuing certain internal things and just ran them in the request, as that was faster than queuing them. But the current situation includes external API calls, so those need to be queued.

krisi_gjika's avatar

Not sure I understand your QueueBucketJob implementation; it seems this runs all internal jobs in one? What happens when the third job in the list fails, would you run 1 and 2 again?

wsamoht's avatar

@krisi_gjika They would only run multiple times if QueueBucketJob itself failed. Otherwise, it is dispatching each job onto the queue so they would run individually like any other job. But, you bring up a good point that I should maybe handle if QueueBucketJob itself fails. Though, worst case if it were to run multiple times, duplicate emails/notifications would get sent. All other work is mostly idempotent.

$this->jobs->each(fn ($job) => dispatch($job));
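One way to harden that loop could look like this (a sketch, not the app's actual code: each dispatch is isolated and failures are collected for reporting instead of failing, and therefore re-running, the whole bucket; the dispatcher is a callable so the sketch runs standalone):

```php
// Sketch: if the third dispatch throws, jobs 1 and 2 are not re-dispatched
// on a bucket retry, because the bucket itself never throws. Failures are
// returned so the caller can report() them individually.
final class SafeBucketRunner
{
    /**
     * @return list<\Throwable> the failures, for reporting
     */
    public function run(array $jobs, \Closure $dispatch): array
    {
        $failures = [];

        foreach ($jobs as $job) {
            try {
                $dispatch($job);
            } catch (\Throwable $e) {
                $failures[] = $e; // isolate: later jobs still dispatch
            }
        }

        return $failures;
    }
}
```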
