realtebo:

Queues: Deadlock found when trying to get lock; try restarting transaction

We WERE USING REDIS, but we can no longer use it because we're enqueuing so much data that it was consuming 32 GB of RAM on the server.

We went back to the database driver, but now we have concurrency problems, because the queue is handled by Supervisor with anywhere from 8 to 24 worker processes.

Why does Redis handle concurrency when dealing with jobs while MySQL does not?

How can we fix the MySQL problem?

Tray2:

Locks occur naturally in the database when one record is locked for update and another process wants to update it as well. Are you using database transactions in your jobs?

I suggest trying to pinpoint the jobs that update the same record and not running them at the same time. You can also look into FOR UPDATE NOWAIT and SKIP LOCKED. NOWAIT makes the database throw an exception immediately instead of waiting for the record to be released, while SKIP LOCKED skips the locked record and continues with the next one.

https://programming.vip/docs/new-features-for-mysql-8.0-nowait-and-skip-locked.html
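Roughly, both options look like this through Laravel's query builder (just a sketch, assuming MySQL 8.0+ and placeholder table/column names; the raw lock clause must run inside a transaction to be held):

```php
use Illuminate\Support\Facades\DB;

DB::transaction(function () {
    // NOWAIT: error out immediately if another session holds the row.
    $job = DB::table('jobs')
        ->where('queue', 'default')
        ->lock('FOR UPDATE NOWAIT')
        ->first();

    // SKIP LOCKED: silently skip rows other workers already hold
    // and return the first unlocked match instead.
    $job = DB::table('jobs')
        ->where('queue', 'default')
        ->lock('FOR UPDATE SKIP LOCKED')
        ->first();
});
```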

realtebo:

@Tray2 I get this error from the queue manager, not from inside one of the jobs!

It happens when Laravel is selecting a job from the jobs table!

Switching to Redis removes the problem, but we cannot use Redis anymore.

We are able to reproduce this behaviour by dispatching 1M jobs that simply do an echo, so no DB access at all.

Tray2:

@realtebo Of course it would lock up everything.

You would get 1M deletes on the jobs table, and each one needs to update the index on the table. So it might not be the table itself but rather the index that causes the deadlocks.

I suggest dropping the index on the jobs table and trying again to see if the deadlocks go away or at least decrease.

Then again, why would you run so many jobs at almost the same time?

realtebo:

@Tray2 Tomorrow I'll try removing the index.

We have 1 million products. We need to recalculate prices on... ehm... 125k of them every day.

This involves some more steps, so we broke the jobs into smaller ones for better load balancing.

Tray2:

@realtebo I would suggest running one batch to update the prices, chunking about 5000 records per iteration. That way you will use fewer resources and you can probably keep the index.
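Something like this, as a minimal sketch, assuming a hypothetical Product model and a hypothetical recalculatePrice() helper:

```php
use App\Models\Product;

// chunkById() pages by primary key, so each iteration loads and
// updates only 5000 rows at a time instead of the whole table.
Product::query()->chunkById(5000, function ($products) {
    foreach ($products as $product) {
        $product->update([
            'price' => recalculatePrice($product), // hypothetical helper
        ]);
    }
});
```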

realtebo:

@Tray2 Yes, we also planned to try this: increase the work per single job to minimize the possibility that 2 processes of the same queue worker try to handle the same job. But of course I'm very SAD that Laravel does not ensure this will never happen. Now I'm no longer sure that a job is not handled twice by 2 parallel processes on the same queue.

Sinnbeck:

@realtebo I don't think Laravel can handle it if it isn't used as designed. It is still bound by how MySQL works. You can try upgrading to the newest version of the database and see if that makes any difference. But it is probably best to rethink your approach, as Tray2 suggested.

realtebo:

@Sinnbeck No, wait: my actual use case is extreme, but my expectation about parallel handling is not.

If the Laravel docs tell me that I can have 2 processes handling the same queue, I expect that NEVER, absolutely NEVER, will a queued job be executed twice. It's a big fault, and now I'm instructing my coworkers to run some tests to ensure that this has never happened on our projects.

If I have 10 processes sending email at a moment when there is ONLY ONE email to send, I SIMPLY CANNOT RISK that this mail is sent twice, or more. This is not an abuse or misuse of Laravel, only a design flaw.

Tray2:

@realtebo In my opinion that would never happen. There is more than one reason for a deadlock, and normally it is something trying to update the same record at the same time; however, in your case the deadlock is caused by the index being updated across your 1M records.

All indexes come at a price, and since you give the queue the daunting task of running two workers and handling hundreds of thousands of updates, the database goes bananas trying to keep the index up to date.

This has nothing whatsoever to do with how Laravel functions; it's a database issue, and in your case the trade-off of having an index on the jobs table.

Tray2:

@Sinnbeck In that post the author claimed that updates don't cause deadlocks, but that's not entirely true.

realtebo:

@Sinnbeck In the post the author suggests:

"Add soft delete: deleted_ At, change the data into an update operation instead of a delete operation. Since it is an update, it will not cause a deadlock (no need to lock the record)"

How to??? If I simply add this column, does Laravel handle it automatically?! How?

Tray2:

@realtebo Every update causes a lock; that is normal database behaviour.

Your problem is a non-problem caused by an unreasonable number of jobs. Use chunking in the job and let the database do its work.

realtebo:

@Tray2 If we could do chunks, we would, of course. We started with big jobs... and progressively broke them down into smaller ones. Anyway, we're still developing, so we can CTRL-A && CTRL-X.

Tray2:

@realtebo The thing with jobs that do big updates on tables is that there is almost never a perfect way to do it. Not knowing the real needs of your application, here are some suggestions worth looking into.

  • Night batches (Does it matter that it takes some time during the night?)
  • Run the same job several times with a few minutes between each run and update 5000 records each run (see the sketch below)

Regardless of which method you end up with, you can't update that many records at once, since it will use up resources in the database and on your server. You somehow need to make it do the DML in chunks. Otherwise your application will be non-responsive in the worst case, and a bit sluggish in the best.
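As a rough sketch of the second suggestion, assuming a hypothetical needs_reprice flag on the products table and the same hypothetical recalculatePrice() helper:

```php
// app/Console/Kernel.php
use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Support\Facades\DB;

protected function schedule(Schedule $schedule)
{
    $schedule->call(function () {
        // Claim at most 5000 flagged rows per run; the flag is
        // cleared so the next run picks up the following batch.
        DB::table('products')
            ->where('needs_reprice', true)
            ->orderBy('id')
            ->limit(5000)
            ->get()
            ->each(fn ($p) => DB::table('products')
                ->where('id', $p->id)
                ->update([
                    'price'         => recalculatePrice($p), // hypothetical
                    'needs_reprice' => false,
                ]));
    })->everyFiveMinutes();
}
```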

This is a problem you really can't just throw money at to solve. You need to trim the queries and you need to abide by the technical limitations of the database. A shitty query is a shitty query.

Jonathan Reinink has made a really good course, Eloquent Performance Patterns, that you should consider checking out.

https://eloquent-course.reinink.ca/

Have you considered storing the updated information in a temporary table and, when all the updates have been done, updating the data in the permanent table?
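Sketched out, assuming a hypothetical product_price_staging table:

```php
use Illuminate\Support\Facades\DB;

// 1. Compute the new prices into the staging table first
//    (chunked inserts, outside any hot path).
DB::table('product_price_staging')->insert([
    ['product_id' => 42, 'new_price' => 19.99],
    // ...
]);

// 2. Apply everything to the permanent table in one statement.
DB::statement('
    UPDATE products p
    JOIN product_price_staging s ON s.product_id = p.id
    SET p.price = s.new_price
');

// 3. Clear the staging table for the next run.
DB::table('product_price_staging')->truncate();
```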

I think it would be good if you could share some code examples of your jobs so we can understand a bit more about what they do and give you tips regarding them.

We are kind of fumbling in the dark as to what can make your jobs more efficient.

realtebo:

@Tray2 It's not my code that causes the deadlocks!!! It's Laravel's own job selection and update query that causes the problem!

My jobs are already chunked, but there are a lot of them.

My DB is doing fine, there is no pressure on it; why do you mention a problem that I did not talk about? This is a non-problem for us now. Our DB is OK. Laravel's code to handle jobs is not perfect. Stop.

We are moving to RabbitMQ to avoid the memory problems of Redis and the concurrency problems of queues handled by the DB.

Tray2:

@realtebo Since you obviously don't want to listen to any advice or share any code to help solve your problem, you are on your own. I wish you good luck with what you are attempting to do.

Sinnbeck:

@realtebo To ensure that no job is run twice, Laravel asks MySQL to lock the row until it can be updated by that process, so two database queries. If two processes were able to read the same row, then both could run the job. I am quite sure it uses pessimistic locking to do this: https://laravel.com/docs/8.x/queries#pessimistic-locking
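The claim pattern looks roughly like this (just a sketch of the idea, not Laravel's actual internals):

```php
use Illuminate\Support\Facades\DB;

$job = DB::transaction(function () {
    // Lock the next unreserved job row; other workers block here
    // (or deadlock) until the transaction commits.
    $job = DB::table('jobs')
        ->where('queue', 'default')
        ->whereNull('reserved_at')
        ->orderBy('id')
        ->lockForUpdate()
        ->first();

    if ($job) {
        // Mark it reserved so no other worker can pick it up.
        DB::table('jobs')
            ->where('id', $job->id)
            ->update(['reserved_at' => now()->getTimestamp()]);
    }

    return $job;
});
```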

The deadlocks most likely come from when the update is run and the index is updated.

Redis, on the other hand, works like a queue where you simply pull an item off the queue. There is no index update that can cause locks.

I assume you found something telling you that RabbitMQ uses less RAM than Redis?

For such huge numbers of jobs I would personally choose Redis, but on a different server. And I would try to lower the hits on the queue.

realtebo:

@Sinnbeck You're totally right. The problem is due to 'too high' parallelism. We sharded what each job can read, so no job can eat records from other jobs, and also moved to RabbitMQ for some tasks.

RabbitMQ uses more disk and less RAM. Redis, from what I can see, tries to keep everything in RAM, so when we need to pass 2M of serialized data [please, keep it, it's absolutely needed] or 130k jobs, RabbitMQ uses A LOT less RAM. I don't even know why.

But the main problem was parallelism using MySQL, and, I repeat, we ended up creating 'numbered' queues, where every job is spread using a % 8 on the record id to shard what it can access. It works. A little bit slower, but it also puts a bunch fewer jobs in Redis.
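The sharding itself is simple; something like this, with a hypothetical RecalculatePrice job:

```php
// Route each product id to one of 8 'numbered' queues so two
// workers never compete for the same shard of records.
foreach ($productIds as $id) {
    RecalculatePrice::dispatch($id)->onQueue('recalc-' . ($id % 8));
}

// Each worker then listens to exactly one shard, e.g.:
// php artisan queue:work --queue=recalc-0
```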

Actually, Redis is still my preferred tool. It's the first time I've had so many jobs, so much data, and a customer who is not paying for a good server...

We also moved queue handling to a different VPS, with a supporting DB, so we greatly lowered CPU usage on the backend and on the main DB node. The total RAM and CPU is the same as before, but it is much better used, and the CPU and DB are less loaded, so the frontend is still responsive.

A hard but very good week of lessons for us, and for my future work.

Thanks to all for your patience.

realtebo:

@Tray2 Because the problem was not our code, I have nothing to share; it was a problem with parallelism, MySQL, and Laravel. We changed some tools and refactored to use the same tools in a different place and in a different way. Problem resolved.

Thanks to you all for your patience and suggestions. In the end we followed nearly all of the suggestions in this topic.

Sinnbeck:

@realtebo Anytime. Happy to hear you found a solution that works for you. And interesting that RabbitMQ has such a big gap in RAM/CPU utilization.

Snapey:

Sounds like you are queuing way too much data. Have you considered what is the least amount of data you need in each job?

realtebo:

We ended up doing a big rewrite to:

  • move some queuing to RabbitMQ
  • move some tasks from parallel queued jobs to very high-efficiency sharded cron jobs
  • optimize what we pass to our jobs so Redis will eat less RAM
