@GodziLaravel Setting the max jobs option is, at least in my opinion, more of a workaround than a fix.
If this happens again, you can also check whether your workers are actually still running:
# Check for any running PHP processes.
$ ps faux | grep php
Supervisor is a pretty sophisticated piece of software that has never failed me before. Did your issue affect all of your workers?
Please also make sure that you actually mean to configure the connection (teamleader vs. database) and not the queue itself. The first parameter (not one of the --options) is the queue connection to use. The connection is something like database, redis, beanstalkd etc., while --queue is an internal identifier you can use to separate your jobs into multiple queues.
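For example (assuming the database connection and a queue named teamleader, as in this thread):

```shell
# First argument = the connection (must be defined in config/queue.php),
# --queue = the named queue on that connection.
php artisan queue:work database --queue=teamleader
```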
One worker can also process multiple queues from the same connection by simply passing them as a comma-separated value to the --queue option: --queue teamleader,default. This would make the worker process jobs from the teamleader and the default queue in that order. So if you have 100 jobs in the default queue and one in the teamleader queue, it would process the teamleader job first before continuing with the others.
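That priority behaviour would then look like this (again assuming the database connection):

```shell
# Jobs on the teamleader queue are drained first; only when it is
# empty does the worker pick up jobs from the default queue.
php artisan queue:work database --queue=teamleader,default
```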