
Dave97

Laravel Eloquent delete(): 1205 Lock wait timeout exceeded

Hi, I have millions of rows in a table that I delete regularly through a queued job with the following instruction:

	DB::table('data_smth')->where('fetch_stamp','<',$limit_3m_date)->delete();

It turns out my code had a bug, so nothing was deleted for a year, which means the job now needs to clear around 60 million rows in this specific table. While deletion works fine on other tables, for this table in production I keep getting:

	General error: 1205 Lock wait timeout exceeded; try restarting transaction in /home/forge/website.com/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOStatement.php:142

On my local server everything is fine. I tried to increase

	wait_timeout

on the MariaDB server on DigitalOcean, but it had no effect.

As a test, even

	DB::table('data_smth')->where('id','=',1)->delete();

throws the same exception.

For info, this table is almost permanently accessed by other queued jobs, which are adding new rows of data from different external sources...

I searched for a solution the entire day but I'm still struggling. Thanks for your help.

0 likes
12 replies
guybrush_threepwood

Can you temporarily disable the cleanup jobs by stopping the workers or commenting out the code?

Maybe you could wait until the locks are lifted and try to delete those records manually in smaller batches, or create a small script that deletes 50,000 rows at a time (by adding a limit to the delete statement).

guybrush_threepwood

Or you could update your code, remove all pending jobs and restart your workers:

	DB::table('data_smth')->where('fetch_stamp','<',$limit_3m_date)->limit(50000)->delete();

Then manually fire the Job about 1200 times...
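If you don't want to fire it by hand, a small loop could do it too (an untested sketch; delete() returns the number of affected rows, and $limit_3m_date is the same cutoff as above):

	use Illuminate\Support\Facades\DB;

	// Repeat the limited DELETE until no matching rows remain.
	do {
	    $deleted = DB::table('data_smth')
	        ->where('fetch_stamp', '<', $limit_3m_date)
	        ->limit(50000)
	        ->delete(); // number of rows removed in this batch

	    usleep(250000); // brief pause so other jobs can grab locks between batches
	} while ($deleted > 0);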

Dave97's avatar
  1. I will indeed try to stop all the other jobs (which are adding new data to the table). Even so, I don't understand why all these jobs accessing the same table don't fail, while the only job in charge of deleting older rows (rows which are not accessed by any process) fails at the same time with an exception.

  2. I didn't want to use

	chunkById(1000, function ($datasmth) { ... })

because it then forces me to loop over the set, but I could indeed use a script for 50k rows at a time in the future. But please note that trying to delete only one row with

	 DB::table('data_per_pairs')->where('id','=',1)->delete();

strangely fails too.
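For reference, the chunkById approach I mentioned would look roughly like this (a sketch, not my actual code; it pages through matching rows by primary key and deletes them chunk by chunk):

	use Illuminate\Support\Facades\DB;

	// Sketch: page through matching rows by primary key, deleting each chunk.
	DB::table('data_smth')
	    ->where('fetch_stamp', '<', $limit_3m_date)
	    ->chunkById(1000, function ($rows) {
	        DB::table('data_smth')
	            ->whereIn('id', $rows->pluck('id'))
	            ->delete();
	    });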

I will try and come back to you. Thanks.

1 like
guybrush_threepwood's avatar

@dave97

I'm not a database expert by any means, but I think it shouldn't be necessary to stop the jobs that are adding new data to the table as they won't generate any locks that restrict your delete statements. Just stop the job that's trying to delete 60M rows at a time and remove any scheduled jobs of the same type (it's possible that you have many jobs piled up dating from days back).

I think when you try to delete 60M rows, MySQL locks them in a transaction and doesn't let you update or delete them until the transaction finishes. And since that is taking forever, you end up with a MySQL timeout, the transaction rolls back (so no rows are deleted), maybe the job starts again, and you end up with those records permanently locked to the point that you can't even execute a single delete manually (as you're currently experiencing).

Deleting 50,000 records could take about 10 minutes, so you can imagine how long 60M would take. By that logic, you'll need to execute the job 1,200 times, and it could take about 200 hours (about 8 days) to finish.

Dave97

A) Digging into the MySQL process list, I realise that when a cleanup job fires

	DB::table('data_smth')->where('fetch_stamp','<',$limit_3m_date)->delete();

the cleanup job fails once it expires, as can be seen in the Horizon dashboard, but it seems the MySQL process that locked the table for the deletion is still there, pending, and it causes the lock wait timeout when another cleanup job is fired.

Indeed, another test showed me jobs labeled as paused/failed in Horizon while the MySQL process was still running in the background.

B) The table being continuously locked for a very long time might be solved with

	DB::table('data_smth')->where('fetch_stamp','<',$limit_3m_date)->limit(50000)->delete();

I will try.
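For anyone hitting the same thing, these standard MySQL/MariaDB commands show whether an old DELETE is still running and holding locks (the id passed to KILL is whatever the process list reports; 12345 is a placeholder):

	-- List running statements; a long-running DELETE will show up here.
	SHOW FULL PROCESSLIST;

	-- InnoDB's view of open transactions and what they are doing.
	SELECT trx_id, trx_state, trx_started, trx_query
	  FROM information_schema.innodb_trx;

	-- Kill a lingering connection by the Id column from SHOW PROCESSLIST.
	KILL 12345;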

Dave97

I killed all the jobs and all the MySQL processes and tried

	 DB::table('data')->where('id','=',1)->delete();

but I get the same exception every time after 50 s:

	PDOException: SQLSTATE[HY000]: General error: 1205 Lock wait timeout exceeded; try restarting transaction in /home/forge/website.com/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOStatement.php:142

I really don't have a clue what can cause this, and I'm desperate after 2 days. The wait_timeout parameter in MySQL is set to 600 s. I might be better off dropping the table.
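For what it's worth, that 50 s matches the default of innodb_lock_wait_timeout, which is the setting error 1205 actually refers to (wait_timeout only closes idle connections). It can be checked and raised per session like this:

	-- Error 1205 is governed by this variable (default 50 s), not by wait_timeout.
	SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';

	-- Raise it for the current session only; a global change needs extra privileges.
	SET SESSION innodb_lock_wait_timeout = 600;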

Tray2

What you can do is copy the information you want to keep into a temporary table, truncate the oversized one, and then move the data back into it.

Something like this:

create table temp_artists as (select * from artists where id = 1);

truncate table artists;

insert into artists (name, slug) select name, slug from temp_artists;

drop table temp_artists;

With your own table names and WHERE conditions, of course.

2 likes
guybrush_threepwood

That's thinking outside the box; I think it's a much better solution, especially since it looks like the number of rows you want to delete is much greater than the number you want to keep.

You could also partition the new table by date in order to easily drop many records in the future.
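For illustration, range partitioning by date could look like this (a sketch: the boundary dates are invented, fetch_stamp is assumed to be a DATETIME, and MySQL requires the partitioning column to be part of every unique key, including the primary key):

	-- Sketch only: boundaries are invented; fetch_stamp must appear in
	-- every unique key (including the primary key) for this to be allowed.
	ALTER TABLE data_smth
	PARTITION BY RANGE (TO_DAYS(fetch_stamp)) (
	    PARTITION p2023 VALUES LESS THAN (TO_DAYS('2024-01-01')),
	    PARTITION p2024 VALUES LESS THAN (TO_DAYS('2025-01-01')),
	    PARTITION pmax  VALUES LESS THAN MAXVALUE
	);

	-- Dropping old data then becomes a metadata operation instead of a DELETE:
	ALTER TABLE data_smth DROP PARTITION p2023;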

Dave97

Thanks @tray2, that gives a good one-shot solution.

@guybrush_threepwood, that gives a nice approach with table partitioning (to be implemented outside of Laravel/Eloquent, which doesn't support it).

I understand at least 2 things from this discussion:

	General error: 1205 Lock wait timeout exceeded

"The above simply means the transaction has reached the innodb_lock_wait_timeout while waiting to obtain an exclusive lock which defaults to 50 seconds. The common causes are:"

  • The offensive transaction is not fast enough to commit or rollback the transaction within innodb_lock_wait_timeout duration.

-> so I should avoid deletion processes whose duration is longer than the innodb_lock_wait_timeout duration

  • The offensive transaction is waiting for row lock to be released by another transaction.

-> if the MySQL lock wait timeout is greater than the Laravel job timeout, Laravel's queue and job management can flag jobs as expired/failed after their timeout duration while the MySQL process they launched keeps running, so it is better not to start a second (or further) MySQL deletion job while the first MySQL process has not yet reached its lock wait timeout.

A very awkward situation, but I will treat this as a limitation to be careful of...
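A sketch of how that lesson could be encoded in the job itself (CleanDataSmth is a hypothetical name, and the 3-month cutoff is a guess based on $limit_3m_date; $timeout and $tries are standard Laravel queued-job properties):

	use Illuminate\Contracts\Queue\ShouldQueue;
	use Illuminate\Support\Facades\DB;

	// Hypothetical job class for the cleanup discussed in this thread.
	class CleanDataSmth implements ShouldQueue
	{
	    // Keep the job timeout above innodb_lock_wait_timeout so Horizon
	    // doesn't flag the job as failed while its DELETE is still running.
	    public $timeout = 120;

	    // No automatic retries: a retry could pile a second DELETE on top
	    // of a first one that still holds its locks.
	    public $tries = 1;

	    public function handle(): void
	    {
	        DB::table('data_smth')
	            ->where('fetch_stamp', '<', now()->subMonths(3)) // assumed cutoff
	            ->limit(50000)
	            ->delete();
	    }
	}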

1 like
ilgityildirim

I haven't touched Laravel since version 1.x; I was looking into another problem and came across this issue. Thus I can't provide a direct solution, but here is what can be done through a raw SQL query if your case is a similar one.

I don't really know how your app works, so I'm just assuming other parts of the app are accessing / reading / analyzing the data from your table, as you mentioned. Thus this is a potential fix if it's OK for the other parts of the app to read the next record set in the non-locked queue.

So another alternative for a similar problem is to actually select the "next non-locked" record set in the database.

For instance, instead of using FOR UPDATE, one can use FOR UPDATE SKIP LOCKED to skip locked record sets / rows and select the next available ones.

Of course this only solves specific cases, such as when you are trying to select a row that is "locked" but it's fine to select the next record instead. In such cases, if your RDBMS and its version support "SKIP LOCKED", you are in luck; this could be the solution to your problem.
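A sketch of what that looks like, assuming MySQL 8.0+ or MariaDB 10.6+ and the table/column names from this thread (the cutoff date is a placeholder):

	-- Grab up to 1000 not-yet-locked rows old enough to delete;
	-- rows locked by another transaction are silently skipped.
	SELECT id
	  FROM data_smth
	 WHERE fetch_stamp < '2024-01-01'
	 ORDER BY id
	 LIMIT 1000
	 FOR UPDATE SKIP LOCKED;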

Now, I'm off to dig into my own problem :)
