JWO's avatar
Level 4

Best practice: Redis Queue is working

Hi,

It happened again: I forgot to run queue:restart after manually making some changes to my Laravel app. A few days later I noticed that some mails from the queue had not been sent. After restarting my default queue worker in Forge, they were sent immediately. I have tried several suggestions to avoid this in the future, but nothing has worked for me. What is considered best practice? What do you do? Your suggestions are appreciated!

0 likes
14 replies
tisuchi's avatar

@JWO Not sure if it's best practice, but a good way to prevent this issue is to use a process manager like Supervisor, which can automatically restart your queue worker if it crashes or stops. It can also monitor the worker and start it if it is not running, so you can be sure your queue worker is always up and processing jobs. Additionally, you can set up email notifications or other monitoring tools to alert you if your queue worker stops, so you can take action quickly.

There are a few reasons why supervisor is considered good practice for running and managing queue workers in Laravel:

  1. Reliability: Supervisor ensures that the queue worker processes are always running and restarts them automatically if they crash. This eliminates the need to manually restart the processes and ensures that the queues are processed continuously.

  2. Scalability: Supervisor allows you to easily manage multiple queue worker processes to handle increased load. This makes it easy to scale your application to handle more queues and reduce processing time.

  3. Monitoring: Supervisor provides an interface for monitoring the status of the queue worker processes. You can easily see which processes are running and their resource usage, making it easy to identify and fix any performance issues.
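The points above boil down to a small Supervisor program definition. A minimal sketch (the program name, paths, and user are assumptions for illustration; Forge generates something similar for you when you add a worker):

```ini
; /etc/supervisor/conf.d/laravel-worker.conf (path and names are examples)
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/forge/example.com/artisan queue:work redis --sleep=3 --tries=3
autostart=true          ; start the worker when Supervisor starts
autorestart=true        ; restart the worker if it crashes or stops
numprocs=2              ; scale by running multiple worker processes
user=forge
redirect_stderr=true
stdout_logfile=/home/forge/example.com/storage/logs/worker.log
stopwaitsecs=3600       ; give a long-running job time to finish on stop
```

Note that even with this in place, Supervisor only restarts workers that die; after a deploy you still need `php artisan queue:restart` so the workers pick up the new code.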

JWO's avatar
Level 4

@tisuchi Thanks! As per the Forge documentation: "Forge's site management dashboard allows you to easily create as many Laravel queue workers as you like. Queue workers will automatically be monitored by Supervisor, and will be restarted if they crash. All workers will start automatically if the server is restarted." So I guess what you are pointing out here is already part of Forge. Yes, it's doing this stuff on its own, but when I forget to tell it "hey, I SSH'd into my codebase and ran php artisan migrate", it silently continues and the jobs get stuck. Or am I wrong?

tisuchi's avatar

@JWO Yes, that is correct. Forge's site management dashboard and Supervisor can help ensure the queue workers are running and will restart them if they crash. However, queue workers are long-lived processes that keep the booted application in memory, so if you make changes to your codebase, such as running migrations, you need to restart the queue workers (php artisan queue:restart) for the changes to take effect. This is why it is a best practice to regularly check the status of your queue workers and make sure they are running as expected.

JWO's avatar
Level 4

@tisuchi thanks! "This is why it is a best practice to regularly check the status of your queue workers and make sure they are running as expected." - Manually?

Sinnbeck's avatar

I suggest making an actual deployment script:

  1. Make changes locally
  2. Commit to git and push to github/gitlab
  3. Run some script that pulls the code on the server, runs migrations, and restarts queues etc.
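The three steps above can be sketched as a small deploy script to run on the server. The paths, branch name, and the dry-run guard are assumptions for illustration; DRY_RUN defaults to 1 here so the script only prints what it would do — set DRY_RUN=0 on the real server to execute the commands:

```shell
#!/bin/sh
# Minimal deploy sketch. APP_DIR and BRANCH are placeholders for your setup.
APP_DIR="${APP_DIR:-/home/forge/example.com}"
BRANCH="${BRANCH:-main}"
DRY_RUN="${DRY_RUN:-1}"

# Helper: print the command in dry-run mode, execute it otherwise.
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run cd "$APP_DIR"
run git pull origin "$BRANCH"
run composer install --no-dev --optimize-autoloader
run php artisan migrate --force     # --force skips the production prompt
run php artisan config:cache
run php artisan queue:restart       # workers finish the current job, then reload
```

The queue:restart at the end is the step that is easy to forget when deploying by hand, which is exactly the problem in this thread.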

As far as I know this isn't part of Forge. You need to buy Envoyer as well: https://envoyer.io/

Personally I use ploi.io, where it's part of the product.

2 likes
malsowayegh's avatar

@Sinnbeck Yes, that's a really nice solution that avoids making manual changes. I'm using something similar through GitHub Actions: on any change to master/main, clear the cache, configuration, routes, etc., and restart the queues.

In my case I'm using Docker, so I build new images to be deployed every time and restart or recreate the containers.
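For the GitHub Actions approach, a workflow along these lines could run the deploy commands over SSH on every push to main. This is only a sketch: the secrets, paths, and the use of appleboy/ssh-action (one commonly used community action) are assumptions, not what the poster necessarily runs:

```yaml
# .github/workflows/deploy.yml (sketch; secrets and paths are placeholders)
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy over SSH
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.DEPLOY_HOST }}
          username: ${{ secrets.DEPLOY_USER }}
          key: ${{ secrets.DEPLOY_SSH_KEY }}
          script: |
            cd /home/forge/example.com
            git pull origin main
            php artisan config:cache
            php artisan route:cache
            php artisan queue:restart
```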

Sinnbeck's avatar

@malsowayegh Yeah exactly :) And nice solution to build new docker images. Proper use of docker 👍

JWO's avatar
Level 4

@Sinnbeck Thanks! I am doing this already: making changes locally, committing and pushing the new code, and having Envoyer on board, which handles the deploy scripts and runs queue:restart. The only thing I do not have, because I am afraid of messing up the live database, is the migrate part, and that is exactly what bit me last time (I had to change a database column after being notified that a string was too long for an insert). My question here would be: can I safely integrate php artisan migrate into the Envoyer deploy scripts without doing something unwanted to the live database?

Sinnbeck's avatar

@JWO As long as you are sure to test it properly, it should be fine. I have tests set up, and my migrations are run against my test database to ensure they work exactly as expected. I also test against exactly the same type of database as production (no SQLite).
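One common way to test against the same engine as production in Laravel is to point the test environment at a throwaway MySQL database instead of SQLite. A sketch (database name and credentials are placeholders):

```ini
# .env.testing -- use the same engine as production so migration tests
# catch engine-specific issues (e.g. string column length limits)
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=myapp_test
DB_USERNAME=tester
DB_PASSWORD=secret
```

Separately, `php artisan migrate --pretend` prints the SQL a migration would run without touching the database, which is a cheap sanity check before letting a deploy script run it for real.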

JWO's avatar
Level 4

@Sinnbeck thank you! When it comes to testing I have to improve my skills - will give it more attention in the future!

Sinnbeck's avatar

@JWO Yeah, it is quite an important skill. A good trick to force yourself into it is to test what you change, or what breaks. If you find a bug, write a test that triggers the bug, then fix it. Once the test is green, the bug is fixed. The same can be done for a new migration: add a new field, then test that field specifically in some way :)

JWO's avatar
Level 4

@Sinnbeck thanks, sounds good! Do you run those tests only locally, or on the server too?
