
Burano's avatar

How to set up a queue worker properly with Forge?

So I've created my Forge server and set up a queue worker NOT as a daemon (running it as a daemon breaks things in my application).

But I've noticed that occasionally my jobs will not run, and the only way to fix it is to restart the worker. It's kind of annoying, and I suspect I've just got some settings wrong.

My current settings are as follows:

  • Maximum Seconds Per Job / 15
  • Rest Seconds When Empty / 10
  • Failed Job Delay / 10
  • Processes / 4
  • Maximum Tries / 3
  • Run Worker As Daemon (NO)

Can someone perhaps give me some insight into what is going on, or perhaps just tell me if there is something wrong with my settings? I primarily use jobs for processing webhooks as well as image conversions, so that should give you an idea of my use case.

Thanks!

0 likes
5 replies
bobbybouwmann's avatar

Your setup looks correct, but I'd highly recommend running the worker as a daemon. That keeps the queue running in the background: daemon mode keeps the framework booted in memory between jobs instead of re-bootstrapping for each one, which is much faster, though it does mean you must restart workers on deploy so they pick up new code.

Do you restart the queue on every deployment? That's a really important part of using queues with Laravel Forge.

1 like
Annaro's avatar

@bobbybouwmann Hi Bobby, would you mind explaining what it does when you use a daemon for the worker? I don't understand it. You're saying that using a daemon for the worker ensures that it keeps the queue running in the background, right? I don't understand how this is different from the default behaviour of queue workers: "Forge allows you to easily start as many Artisan queue workers as you like. The workers will automatically be monitored by Supervisor, and will be restarted if they crash. All workers will start automatically if the server is rebooted." Thank you!

Burano's avatar

@bobbybouwmann Yes, I use Envoyer and run it in a deployment lifecycle hook:

cd {{release}}
php artisan queue:restart
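
A hedged aside on that hook, in case it's relevant: `queue:restart` doesn't kill workers directly; it writes a timestamp to the cache key `illuminate:queue:restart`, which each worker checks between jobs. If the process running the hook and the workers resolve different cache stores, the signal is silently lost. A quick way to compare the two (assuming `tinker` is installed):

```shell
# The deploy user and the workers must see the same cache store,
# otherwise queue:restart's signal never reaches the workers.
grep CACHE_DRIVER .env
php artisan tinker --execute="echo config('cache.default');"
```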

The issue is using a daemon breaks a very critical part of my application that basically makes it unusable. (I have absolutely no clue why and after WEEKS of trying to fix it, it was safe to say I was overjoyed when I found out turning off the daemon fixed it. I really don't care to re-enable it and live through that nightmare again.)

What are my other options? Surely there must be another way to deal with this without using daemon?
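
For reference, one daemon-free pattern (a sketch, not a confirmed fix for this thread, and it assumes a Laravel version that supports `--stop-when-empty`) is to run short-lived workers from cron or the Forge scheduler, so each invocation starts with fresh code and exits once the queue drains:

```shell
# Hypothetical cron / Forge scheduler entry: every minute, start a worker
# that processes pending jobs and exits when the queue is empty.
# Path and flags are illustrative.
* * * * * php /home/forge/example.com/artisan queue:work --stop-when-empty --tries=3 --timeout=15
```

Because each worker exits after its run, there's no long-lived process to go stale, at the cost of up to a minute of latency before new jobs are picked up.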

bobbybouwmann's avatar

Which part of your application breaks when you enable the daemon? I've never heard of anything like this...

Burano's avatar

@bobbybouwmann Sorry for the late reply. My application integrates with forum software and it causes various parts of the forum software to break such as granting/removing usergroups, remotely posting threads, etc... (I take advantage of all these features.)

I've done some more research and discovered that apparently some people have found that calling queue:restart from Envoyer when using Forge just doesn't work as expected. (Which is a shame, considering they're both made by the same people, and I thought integration was a big selling point.)

But for now I'm using a Forge daemon to execute queue:restart every hour. I'm not even sure this works yet; I generally have to wait a few days to see whether things break.
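
If a dedicated daemon feels heavy just to issue a restart signal, a plain cron entry can do the same thing (hypothetical path):

```shell
# Run queue:restart at the top of every hour instead of via a Forge daemon.
0 * * * * php /home/forge/example.com/artisan queue:restart
```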

To give more information about the problem: it appears that after 3–4 days the worker will still report as active, but it stops processing jobs. The only way to fix it is to restart the worker. While the worker is in this state, my server sits at 100% CPU usage; as soon as the worker is restarted, usage drops significantly.

So I'm not really sure what the issue is to be completely honest with you, and I've never seen anything about this issue before.

My current working theory is that Envoyer was skipping or breaking the queue:restart deployment hook, so after x amount of deploys the old application code is left sitting in memory, which causes some very bad things to happen. I can't really provide any more information than that, sadly, since I barely understand why this is happening myself.
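
One hedged mitigation worth noting (assuming a recent Laravel version; these flags don't exist in older releases): the worker itself can be given a bounded lifetime, so a missed `queue:restart` can't leave stale code running indefinitely:

```shell
# Illustrative worker invocation: the worker exits on its own after an
# hour or 1000 jobs, whichever comes first, and Supervisor (or Forge)
# restarts it with fresh code. --timeout=15 matches the settings above.
php artisan queue:work --max-time=3600 --max-jobs=1000 --tries=3 --timeout=15
```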
