Hey guys, I think this is the right channel.
I'm doing some crawling on a couple of websites, and I decided to put on average 50 URLs to crawl into each job. Most of the time I'm crawling around 3,000 URLs, so that gives around 60 jobs to process each run. So each job has to fetch the content of about 50 URLs.
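For context, the batching is roughly this (simplified sketch; CrawlUrls is a stand-in name for my actual job class):

// Split the ~3,000 URLs into jobs of ~50 each and push them onto the queue.
collect($urls)
    ->chunk(50)
    ->each(function ($chunk) {
        dispatch(new CrawlUrls($chunk->values()->all()));
    });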
Since I've put the crawling on a queue, I've noticed a huge spike in my server's physical memory usage and in the number of processes. This morning I ran the queue worker against the 60 jobs; it got through about 6 of them, and physical memory went up to 355MB out of 1GB and the process count to 36 out of 100.
And even though that was around 3 hours ago, neither the process count nor the physical memory usage has gone down.
Is this normal?
I'm running the scheduler as a cron entry, and in my Kernel.php I'm executing the queue:listen command to process any pending jobs.
$schedule->command('queue:listen --daemon')->dailyAt('11:35')->withoutOverlapping();
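And the cron entry itself is just the standard Laravel scheduler one (the artisan path here is a placeholder for my actual project path):

* * * * * php /path/to/artisan schedule:run >> /dev/null 2>&1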