theUnforgiven

deadlock issues

I'm experiencing deadlock issues:

SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction

I was wondering what approaches or software can help prevent this from happening?

JeroenVanOort

Can you explain when this happens? What are you trying to do?

theUnforgiven

Trying to add records to the same tables from multiple points at the same time.

theUnforgiven

I'm guessing Beanstalk would be the best option here right?

theUnforgiven

Anyone else encountered this and what was the solution please?

theUnforgiven

Ok, maybe that's worth looking into. I don't make the decisions, though, so I'm just looking for solutions, and this is a good starting point.

Thanks @willvincent. Do you have any other recommendations besides using something like Beanstalkd for this?

jekinney

You must understand the problem in order to find the right solution. I have no idea if you do or not. But the articles referenced, at least the few I glanced over, provide that.

So saying Beanstalkd might be the solution is probably inaccurate. My guess is that some transaction is trying to do its work while the DB is locked performing another transaction. This can happen with queued and non-queued code trying to perform transactions at or around the same time. In theory the queue will retry, so it's probably not a queued action.

Like many things, DB locking is arguably something you don't want to change at the database level, but rather handle in code, since the lock exists to ensure data consistency.
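
Handling it in code usually means catching the deadlock error and retrying the transaction, which is also what the MySQL error message itself suggests ("try restarting transaction"). A minimal language-agnostic sketch of such a retry loop; `DeadlockError`, `run_with_retry`, and `flaky_txn` are illustrative stand-ins, not real driver or Laravel APIs:

```python
import random
import time

class DeadlockError(Exception):
    """Stand-in for whatever exception your driver raises on SQLSTATE 40001 / MySQL 1213."""

def run_with_retry(txn, max_attempts=3):
    """Run a transaction callable, retrying with backoff when it deadlocks."""
    for attempt in range(1, max_attempts + 1):
        try:
            return txn()
        except DeadlockError:
            if attempt == max_attempts:
                raise  # give up after the last attempt
            # Brief randomized backoff so competing transactions de-synchronize.
            time.sleep(random.uniform(0.01, 0.05) * attempt)

# Example: a transaction that deadlocks twice and then commits.
calls = {"n": 0}
def flaky_txn():
    calls["n"] += 1
    if calls["n"] < 3:
        raise DeadlockError()
    return "committed"

result = run_with_retry(flaky_txn)  # "committed" on the third attempt
```

In Laravel specifically, `DB::transaction()` already accepts a retry count as its second argument (e.g. `DB::transaction($callback, 3)`), which retries the closure when a deadlock is detected.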

I think of it as a user hitting submit too fast while the DB is still processing the first transaction, so it refuses subsequent requests until it's done, i.e. processing requests in the order they were received.

One trick I've used: add a priority integer to your queue. Queue all requests, then process the queue in priority order, so something that needs processing ASAP gets priority 1 and everything else gets a lower number of urgency.
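
The priority trick above can be sketched with a simple min-heap, where a lower integer means more urgent (the job names here are made up for illustration):

```python
import heapq

# Each entry is (priority, job); heapq always pops the smallest priority first.
queue = []
heapq.heappush(queue, (2, "send-newsletter"))
heapq.heappush(queue, (1, "charge-payment"))   # needs processing ASAP
heapq.heappush(queue, (3, "cleanup-logs"))

processed = []
while queue:
    priority, job = heapq.heappop(queue)
    processed.append(job)

# processed == ["charge-payment", "send-newsletter", "cleanup-logs"]
```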

theUnforgiven

Seems like a queue related issue from what I can see.

Stack trace:
#0 /srv/bl/vendor/laravel/framework/src/Illuminate/Database/Connection.php(618): Illuminate\Database\Connection->runQueryCallback('select * from `...', Array, Object(Closure))
#1 /srv/bl/vendor/laravel/framework/src/Illuminate/Database/Connection.php(324): Illuminate\Database\Connection->run('select * from `...', Array, Object(Closure))
#2 /srv/bl/vendor/laravel/framework/src/Illuminate/Database/Query/Builder.php(1431): Illuminate\Database\Connection->select('select * from `...', Array, false)
#3 
sngular

Ran into the same problem today... no clues so far.

SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction (SQL: update jobs set reserved = 1, reserved_at = 1479156290, attempts = 1 where id = 40116)

Maybe I'll switch to Redis.

shez1983

Do you happen to have TWO workers running? It seems like something else is trying to access that table or particular row while it is locked by another process.

(I am not expert enough to know whether MySQL locks the whole table or just the row for each process.)

DigitalKrikits

@shez1983 your reply helped me with a similar race condition. I just set the Supervisor numprocs setting to 1. I've only tested a couple of times, but my issue seems to have gone away. Thanks!
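
For reference, the `numprocs` knob lives in the Supervisor program block for the worker. A sketch, assuming a typical Laravel setup (the program name and command are illustrative, adjust to your own):

```ini
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php artisan queue:work --tries=3
numprocs=1
autostart=true
autorestart=true
```

Setting `numprocs=1` serializes job processing, which sidesteps the race at the cost of throughput.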

Adarsh_Sojitra

I have the same issue. I have no idea what's happening because I have almost no failed jobs either! The number of failed jobs I was able to find in 7.4MB of worker log was 3.

I tried looking for this issue on the internet, but I was unable to find a solution; it looks like a very complex issue.

Any solutions from this thread? I am using 6 queues and 12 queue workers via Supervisor. It looks like multiple queue workers are trying to lock the same record. But how is that possible when I have already defined --tries=3?
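
Note that `--tries` controls how many times a failed job is re-attempted; it does not stop two workers from grabbing the same row. Workers avoid that by reserving a job with an atomic compare-and-set update that can only succeed for one of them. A sketch using SQLite to stand in for the `jobs` table (not Laravel's actual implementation, just the idea):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, reserved INTEGER DEFAULT 0)")
conn.execute("INSERT INTO jobs (id) VALUES (1)")
conn.commit()

def try_reserve(conn, job_id):
    # Atomic compare-and-set: the UPDATE only matches while the job is
    # unreserved, so of two competing workers exactly one sees rowcount == 1.
    cur = conn.execute(
        "UPDATE jobs SET reserved = 1 WHERE id = ? AND reserved = 0",
        (job_id,),
    )
    conn.commit()
    return cur.rowcount == 1

first = try_reserve(conn, 1)   # True: this worker wins the job
second = try_reserve(conn, 1)  # False: the job is already reserved
```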

achatzi

I encountered this issue recently and, from what I understand, the problem lies with multiple workers on the same queue.

I had 10 workers running the same queue and I always got a deadlock with the jobs table because some worker was trying to delete a finished job while another one was trying to update some other job.

I changed the driver to redis in order to solve it completely (and because I had timeout issues as well).

EDIT: I also found this article that explains things in detail https://ph4r05.deadcode.me/blog/2017/12/23/laravel-queues-optimization.html
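
For anyone wanting to make the same driver switch: in Laravel the queue driver is selected via the environment. A sketch, assuming a recent Laravel version (older versions used `QUEUE_DRIVER` instead of `QUEUE_CONNECTION`):

```ini
# .env - move the queue off the database driver to avoid contention on the jobs table
QUEUE_CONNECTION=redis
```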
