jnbn (Level 13):

Queue (SQS) Problem on Vapor

We're using Vapor on our project with 3 different queues on SQS.

We have a command that generates about 10-15 jobs for a queue that only it uses.

Other than that, we have a few jobs in our console kernel, one of which is triggered every minute (no queues are specified for those):

$schedule->job(new ExpireFinishedEvents)
    ->everyMinute()
    ->withoutOverlapping();

$schedule->job(new SITSImportJob())
    ->dailyAt('21:00')
    ->withoutOverlapping(); // creates 5 different jobs on the import queue

$schedule->command('telescope:prune')->daily();

$schedule->command('command:import-people-data')
    ->everySixHours(); // creates 10 different jobs on the import-people queue

As far as I can see, as soon as the ExpireFinishedEvents job is dispatched from the kernel, the current jobs on the queue somehow stop working and the import process cannot be completed. When I call artisan command:import-people-data, it starts running but only imports a few records before the ExpireFinishedEvents job fires from the kernel. (We get a different number of people imported on each run.)

Do you see any reason for this behaviour, or am I missing something about SQS or Vapor? (Everything works fine locally with Redis, etc.)

Thanks

0 likes
6 replies
damonb:

+1 on this issue just appearing. I'm dealing with a similar issue that wasn't a problem a week ago. I run a command that creates 20-25 jobs. The number of jobs that get processed is always random, but they never all complete. I've been trying to solve this for the past 24 hours to no avail.

damonb:

Not sure if this helps, but I updated the queue-timeout property in vapor.yml to 10 (queue-timeout: 10), and it appears that all the SQS messages are now processing.
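For anyone unsure where that property lives: queue-timeout is set per environment in vapor.yml. A minimal sketch (the app name and environment key here are placeholders, not from the thread):

```yaml
# vapor.yml (illustrative fragment)
name: my-app
environments:
    production:
        # maximum number of seconds a queued job may run
        # before the queue Lambda invocation times out
        queue-timeout: 10
```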

jnbn (Level 13):

I've tried setting queue-timeout: 600, but it didn't help in my case =/

damonb:

@jnbn mine was previously set to 300. I dropped it to 10 and that seemed to resolve it. Not really sure why, but my system seems to be processing all of its SQS messages at the moment.

jnbn (Level 13):

Realised that we were using the same queue names in different environments, so the workers couldn't process all the jobs on the queue.

Started using SQS_SUFFIX and changed the default queue names on Vapor, so it seems OK for now.

Thanks for your help @damonb

1 like
itsmedave:

Had the same issue. I didn't realize I was using the same queue in different Vapor environments because of the name. Using SQS_SUFFIX resolved it, thanks @jnbn.

If you define a queue named notifications in two different Vapor projects, they end up being the same queue in AWS SQS. Vapor doesn't differentiate between them (with a suffix or a prefix), even though they are defined in different Vapor projects.

My recommendation is that you always define queues this way in vapor.yml:

software1/vapor.yml

queues:
    - notifications-software1

software2/vapor.yml

queues:
    - notifications-software2

And use SQS_SUFFIX=-software1 or SQS_SUFFIX=-software2 respectively in the environment variables.
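For context on why SQS_SUFFIX works: Laravel's stock SQS connection in config/queue.php reads a suffix from the environment and appends it to the queue name before building the SQS queue URL. This is roughly what the framework's default config looks like (the prefix URL below is Laravel's placeholder, not a real account):

```php
// config/queue.php — sketch of Laravel's default SQS connection.
// The 'suffix' value (from SQS_SUFFIX) is appended to the queue name
// when the queue URL is resolved, so two projects sharing queue names
// end up on distinct SQS queues.
'sqs' => [
    'driver' => 'sqs',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'prefix' => env('SQS_PREFIX', 'https://sqs.us-east-1.amazonaws.com/your-account-id'),
    'queue' => env('SQS_QUEUE', 'default'),
    'suffix' => env('SQS_SUFFIX'),
    'region' => env('AWS_DEFAULT_REGION', 'us-east-1'),
],
```

With SQS_SUFFIX=-software1, a job dispatched to the notifications queue resolves to the notifications-software1 queue in SQS, matching the queue declared in software1's vapor.yml above.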

3 likes
