
Bartude

Laravel Queue takes up too much physical memory and processes

Hey guys, I think this is the right channel.

I'm doing some crawling on a couple of websites, and I decided to assign on average 50 URLs to each job. Most of the time I'm crawling around 3,000 URLs, which works out to roughly 60 jobs per run. So each job has to fetch and search the content of about 50 URLs.

Since I put the crawling on a queue, I've noticed a huge spike in my server's physical memory usage and process count. This morning I ran the queue worker against 60 jobs; after only about 6 of them, physical memory was up to 355MB out of 1GB and the process count up to 36 out of 100.

And even though that was around three hours ago, neither the process count nor the physical memory usage has gone down.

Is this normal?

I'm running the scheduler as a cron entry, and in my kernel.php I'm executing the queue:listen command to see if there are any jobs.

$schedule->command('queue:listen --daemon')->dailyAt('11:35')->withoutOverlapping();
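For context, the cron side of this setup is normally a single entry that fires Laravel's scheduler every minute, which then decides whether the scheduled command above is due. This is the standard entry from the Laravel docs; the path is a placeholder for your own install:

```shell
# Standard Laravel scheduler cron entry (path is a placeholder).
# schedule:run exits quickly; it is queue:listen, once started by the
# scheduler, that stays resident and holds on to memory.
* * * * * php /path/to/artisan schedule:run >> /dev/null 2>&1
```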

lara65535

From what I've been reading, queue:listen runs 24/7. It's like npm run watch: it keeps a process alive constantly, executing as fast as it can without remorse for the system.

I've heard of, and personally use, php artisan queue:work --once. This way it runs once, processes a single item from the queue, and then terminates. This also keeps memory usage low.
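As a sketch of that approach, you could drive the finite worker straight from cron instead of keeping a listener resident (the path is a placeholder; --once exists on queue:work in this Laravel version):

```shell
# Sketch: process at most one queued job per minute, then exit.
# The worker process dies after each run, so memory is returned to the
# OS instead of accumulating in a long-lived listener.
* * * * * cd /path/to/app && php artisan queue:work --once >> /dev/null 2>&1
```

The trade-off is throughput: one job per minute is slow for 60 jobs, so in practice you'd tune how often cron fires or how much work each job does.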

fideloper

Queue workers aren't really meant to be called from a cron task, although calling one with withoutOverlapping() could work, I guess. Still, I'm a bit suspicious of that setup; I have a feeling calling it from cron may be related to what you're seeing.

Check this out to see about using supervisord (if your hosting supports it): https://serversforhackers.com/monitoring-processes-with-supervisord

Example supervisord config for a laravel queue worker: https://laravel.com/docs/5.4/queues#supervisor-configuration
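For reference, a supervisord program entry for a worker looks roughly like this (adapted from the Laravel 5.4 docs linked above; user, paths, and process count are placeholders for your own server):

```ini
; Sketch: supervisord keeps the worker alive and restarts it if it dies,
; so you don't need cron or queue:listen at all.
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/forge/app.example.com/artisan queue:work --sleep=3 --tries=3
autostart=true
autorestart=true
user=forge
numprocs=4
redirect_stderr=true
stdout_logfile=/home/forge/app.example.com/worker.log
```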

chaudigv

Resource Considerations

Daemon queue workers do not "reboot" the framework before processing each job. Therefore, you should release any heavy resources after each job completes.

For example, if you are doing image manipulation with the GD library, you should free the memory with imagedestroy when you are done processing the image.
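A minimal sketch of that pattern, assuming the GD extension is installed (the function name and sizes here are made up for illustration; only imagecreatetruecolor, imagesx, imagesy, and imagedestroy are real GD calls):

```php
<?php
// Sketch: free GD resources explicitly when a job finishes, because a
// daemon worker reuses the same PHP process for every job, so memory
// is not reclaimed at "script end" the way it is for a web request.
function processImage(int $width, int $height): int
{
    $img = imagecreatetruecolor($width, $height); // allocate a GD image
    // ... image manipulation would happen here ...
    $pixels = imagesx($img) * imagesy($img);      // stand-in for real work
    imagedestroy($img);                           // release the memory now,
                                                  // not at process shutdown
    return $pixels;
}

echo processImage(100, 50), "\n"; // prints 5000
```

The same idea applies to other heavy resources in a daemon worker: close database cursors, file handles, and large arrays after each job rather than letting them accumulate.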
