earmsby

queue worker on production

I've been playing around with Filament Exporting & Importing and it's pretty nice. Of course it uses job batches to process the imports and exports. In my local development (with Herd) I've just run php artisan queue:work before trying to import/export, and that's fine.

But I'm about to push this functionality up to the live production server (on Forge). Do I just run the queue:work command on the Forge server? Or is there something else (better?) I should do to facilitate the import/export jobs?

I'm new to jobs, queues and batches so am a little at sea about how to handle this.

Shivamyadav

Use Supervisor. It:

  • Automatically restarts the worker on failure
  • Restarts workers on deploy
  • Keeps the worker running in the background
  • Is persistent (doesn’t stop when your SSH session disconnects)
  • Automatically handles job batches, Filament jobs, etc.
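For reference, if you ever set Supervisor up by hand rather than through Forge, a minimal program entry looks roughly like the sketch below. The site path, user, and worker options are assumptions you'd adjust for your own server:

```ini
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
; site path below is an assumption; point it at your own install
command=php /home/forge/example.com/artisan queue:work --sleep=3 --tries=3 --timeout=120
autostart=true
autorestart=true
user=forge
numprocs=1
redirect_stderr=true
stdout_logfile=/home/forge/example.com/storage/logs/worker.log
stopwaitsecs=3600
```

After saving it (e.g. as /etc/supervisor/conf.d/laravel-worker.conf), run supervisorctl reread and then supervisorctl update so Supervisor picks it up.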

shahriar_shaon

You don't have to run php artisan queue:work by hand on your production server. Since you're using Forge, the best approach is to set up a Queue Worker (daemon) so it runs in the background and restarts automatically.

Here’s what you should do on Forge:

  1. Go to your Server → select your Site.
  2. Open the Workers (or Daemons) tab.
  3. Create a new worker with a command like:
php artisan queue:work --tries=3 --timeout=120
  4. Forge will keep this worker running permanently, restart it if it crashes, and start it again after a server reboot.
  5. Make sure your .env queue driver is set properly (e.g., QUEUE_CONNECTION=database or redis).

This is the recommended way of running queue workers in production. If you'd like, you can also run Horizon instead of queue:work.
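If you do go the Horizon route, note that it requires the redis queue connection. A rough sketch of the setup (run inside your project; the daemon command then changes too):

```shell
composer require laravel/horizon
php artisan horizon:install   # publishes config/horizon.php
php artisan horizon           # use this as the Forge daemon command instead of queue:work
```

On deploy you'd then run php artisan horizon:terminate rather than queue:restart.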

earmsby

Thank you! That sounds like exactly the solution.

LaryAI

Absolutely, this is a common question when moving Laravel queue usage from local development to production.

Short Answer:
On production, you should run your queue worker as a supervised daemon process, so that it restarts automatically if it stops and you don't have to manually run php artisan queue:work every time.

Recommended Approach on Forge (and other servers)

1. Use Supervisor to manage queue:work

  • Supervisor is a process monitor. Laravel Forge includes integration for Supervisor, making this setup easy.
  • Why? If your queue worker crashes, Supervisor will restart it automatically, ensuring your jobs don’t get stuck.

2. Setting It Up on Forge

On Forge, you can set up a queue worker via the “Daemon” tab in your site’s dashboard:

  1. Log in to Forge, select your server and site.
  2. Go to Daemons.
  3. Click Create Daemon and set the parameters:
    • Command:
      php /home/forge/yourdomain.com/artisan queue:work --sleep=3 --tries=3 --timeout=60
      
      (Adjust path and options as you need.)
    • User: Usually forge or whatever user runs your webserver.
    • Directory: Set this to your site's path, e.g., /home/forge/yourdomain.com
  4. Save/start the daemon.
  5. Now, your queue worker will always be running in the background.

3. Some Best Practices

  • You don't need a --daemon flag: on any recent Laravel version, queue:work already runs as a long-lived daemon by default.
  • Use appropriate flags like --tries, --timeout.
  • Restart the queue worker after deployment so it doesn’t use stale code:
    php artisan queue:restart
    
    You can add this to your Forge deployment script.
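As a sketch, the tail end of a Forge deployment script (the site path and branch here are assumptions) could look like:

```shell
cd /home/forge/example.com
git pull origin main
composer install --no-interaction --prefer-dist --optimize-autoloader
php artisan migrate --force
# Ask running workers to exit after their current job;
# Supervisor / Forge's daemon manager restarts them on the new code:
php artisan queue:restart
```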

4. Checking Worker Status

  • You can check on the jobs with:
    php artisan queue:failed
    
    and retry failed jobs with:
    php artisan queue:retry all
    

Summary

Don’t just run php artisan queue:work manually on production—set it up as a daemon with Supervisor or Forge's Daemons tab. This ensures your imports/exports (or any queued jobs) run reliably.

Official Laravel docs for more info:
https://laravel.com/docs/queues#supervisor-configuration

If you have any specific Forge issues, let me know!
