sweet, would love to hear how it turns out for you. I've had it going for just over a day now myself haha
Nope, it's been solid so far. I killed the process manually with kill -9 from another SSH session, and sure enough the cron job kickstarted it back up the next minute it ran :)
The benefit is that the worker daemon keeps the same copy of the app code booted in memory instead of reloading it for every job, so it's a bit more efficient. I also read in other threads that queue:work --daemon is preferred over queue:listen.
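For reference, the two commands being compared (any flags beyond the basics are omitted here):

```shell
# Boots the framework once and keeps it in memory between jobs (the daemon approach)
php artisan queue:work --daemon

# Re-boots the full framework for every job it pops, simpler but heavier
php artisan queue:listen
```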
The solution I used, which might be more resource efficient, is this:
Curious to hear your thoughts on this approach. It uses Laravel's native withoutOverlapping logic, which does something similar to what your solution is doing with the PID file. I think Laravel just stores its mutex in the storage dir.
What I have done in a similar situation (I'm on shared cPanel hosting with no way to install Supervisor) is use cron to run Laravel's scheduler every minute, and inside the scheduler I used this:
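It was along these lines (a sketch; the --tries/--sleep values and project path are assumptions, adjust to taste):

```php
// app/Console/Kernel.php (sketch; option values here are assumptions)
protected function schedule(Schedule $schedule)
{
    // schedule:run itself is fired every minute by the usual cron entry:
    // * * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1
    $schedule->command('queue:work --daemon --tries=3 --sleep=3')
             ->everyMinute()
             ->withoutOverlapping(); // only start a new worker if the previous one is gone
}
```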
So what's happening is: the first time it runs, it starts the queue worker in daemon mode; then on every subsequent minute, withoutOverlapping makes it start again only if the previous one crashed, exited, or is otherwise no longer running. This essentially gives you supervisor-like functionality. In the worst case it takes up to a minute for the queue worker to come back after a failure or memory limit hit, but in most cases the first process stays alive and works through the queue. This is a better way to get queue:listen-like behaviour without Supervisor.
The caveat, of course, is that you need to run php artisan queue:restart whenever you deploy new code, so that the daemon worker restarts and picks up the fresh app code.
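For example, at the end of a deploy script:

```shell
# Run after pulling new code so the long-lived worker reboots with it.
# queue:restart just sets a flag in the cache that the daemon checks
# between jobs, so the job currently running finishes cleanly first.
php artisan queue:restart
```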
Hope this helps.