GodziLaravel:

Supervisor Issue: Queues Not Executing Despite Status 'RUNNING'

Hey,

I'm facing an issue with Supervisor in my Laravel app. Despite supervisorctl showing everything as RUNNING, queued jobs aren't executing. Checking the jobs table reveals no changes.

Oddly, restarting Supervisor (sudo service supervisor restart) resolves the problem temporarily, but I need a lasting solution.

Details:

Laravel version: 9
Supervisor version: 4.2.1
OS: Ubuntu 22.04.3 LTS

Any insights or tips on troubleshooting this intermittent behavior would be greatly appreciated.

Thanks,

kiwi0134:

Could you please show us the supervisor config for your application? Make sure to log your output so you can see if the worker itself is failing or crashing.

This is how I do it inside the supervisor configuration:

redirect_stderr=true
stdout_logfile=/path/to/your/logs/worker.log
GodziLaravel:

@kiwi0134 thanks for your answer!

About the logs: I didn't find anything from yesterday. The problem started yesterday evening (about 16 hours ago).

I have 4 worker configurations:

[program:laravel-websockets-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /data/www/**************.com/artisan websockets:serve --port 6101
autostart=true
autorestart=true
user=www-data
numprocs=1
redirect_stderr=true
stdout_logfile=/data/www/**************.com/storage/logs/worker-websockets.log

[program:laravel-teamleader-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /data/www/**************.com/artisan queue:work teamleader --queue=teamleader --sleep=3 --timeout=600
autostart=true
autorestart=true
user=www-data
numprocs=1
redirect_stderr=true
stdout_logfile=/data/www/**************.com/storage/logs/worker-teamleader.log

[program:laravel-default-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /data/www/irp.icareweb.com/artisan queue:work database --queue=default --sleep=3 --timeout=180
autostart=true
autorestart=true
user=www-data
numprocs=1
redirect_stderr=true
stdout_logfile=/data/www/**************.com/storage/logs/worker-default.log

[program:laravel-absences-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /data/www/**************.com/artisan queue:work absences --queue=absences --sleep=3 --timeout=600 --memory=128
autostart=true
autorestart=true
user=www-data
numprocs=1
redirect_stderr=true
stdout_logfile=/data/www/**************.com/storage/logs/worker-absences.log

krisi_gjika:

@GodziLaravel Are you sure your queue names match the jobs in your database? Since you have only one worker per queue, are you sure the worker isn't stuck processing a long-running job, or retrying failing jobs?

GodziLaravel:

@krisi_gjika Yes, I'm 100% sure. I also checked all available logs related to these jobs and found nothing unusual. And once I restarted Supervisor, everything worked fine again. It's as if Supervisor had been paused for those 16 hours, so I think the issue is with Supervisor itself.

krisi_gjika:

@GodziLaravel Try setting a --max-jobs= limit on your worker so that Supervisor restarts the worker after it has processed N jobs.
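As a sketch, the default worker's Supervisor entry could be adjusted like this (the --max-jobs and --max-time options exist on queue:work in Laravel 8+; everything else is copied from the config posted above):

```ini
[program:laravel-default-worker]
process_name=%(program_name)s_%(process_num)02d
; restart the worker after 1000 jobs or one hour, whichever comes first
command=php /data/www/irp.icareweb.com/artisan queue:work database --queue=default --sleep=3 --timeout=180 --max-jobs=1000 --max-time=3600
autostart=true
autorestart=true
user=www-data
numprocs=1
redirect_stderr=true
stdout_logfile=/data/www/**************.com/storage/logs/worker-default.log
```

With autorestart=true, Supervisor spawns a fresh worker each time the old one exits, so a stuck or leaking process gets replaced automatically.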

kiwi0134:

@GodziLaravel Setting the max jobs option, at least in my opinion, is more like a workaround than a fix. If this happens, you can also always check if your workers are actually still running.

# Check for any running PHP processes.
$ ps faux | grep php

Supervisor is a pretty sophisticated piece of software that has never failed me before. Did your issue affect all your workers?

Please also make sure that you actually mean to configure the connection (teamleader, database) and not the queue itself. The first argument (not one of the --options) is the queue connection to use. The connection is something like database, redis, or beanstalkd, while --queue is an internal identifier you can use to separate your jobs into multiple queues.
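To illustrate the distinction (a hypothetical sketch of the config/queue.php shape, not your actual file): running queue:work teamleader only works if a connection named teamleader is defined there, e.g. as a second database-driver connection:

```php
// config/queue.php (excerpt, hypothetical)
'connections' => [
    'database' => [
        'driver' => 'database',
        'table' => 'jobs',
        'queue' => 'default',
        'retry_after' => 90,
    ],
    // a custom connection named "teamleader" must exist
    // for `queue:work teamleader` to resolve
    'teamleader' => [
        'driver' => 'database',
        'table' => 'jobs',
        'queue' => 'teamleader',
        'retry_after' => 630, // should exceed the worker's --timeout
    ],
],
```

If no such connection exists, the first argument is silently the wrong thing to vary, and only --queue actually separates the jobs.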

One worker can also process multiple queues from the same connection by passing a comma-separated value to the --queue option: --queue teamleader,default. This makes the worker process jobs from the teamleader and default queues in that order of priority. So if there are 100 jobs in the default queue and one in the teamleader queue, it will process the teamleader job first before continuing with the others.
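That priority setup could be sketched as a single worker command (assuming the database connection from the configs above; this is an illustration, not a drop-in replacement):

```shell
php artisan queue:work database --queue=teamleader,default --sleep=3 --timeout=600
```

One Supervisor program running this command would drain the teamleader queue before touching default, which may let you consolidate workers instead of running one per queue.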

RanjanNagarkoti:

@godzilaravel Did you find a fix for this issue? I'm facing the same problem, and it's been going on for more than a month. Every day I have to restart Supervisor 4-5 times.
