
jswoolf01's avatar

Best way to truncate very large data table?

Long story short: I maintain a large Laravel database app that runs mainly on PHP, with MariaDB as the database engine. One table in this database is a log of changes to another table. I recently discovered that due to a problem of some kind (not yet sure what), the log table had bloated to a frightening size: more than 167 million records, with the vast majority of them being duplicates: the same update actions being logged over and over again. I immediately disabled the logger function, so the log table is not accumulating new records. I've also made some other changes that I believe have fixed the core issue of repetitive updates. However, now I need to clean out all those duplicate log entries.

The simplest answer is to just wipe the log table. It is not using soft deletes. It has no child tables, so database integrity shouldn't be an issue. I'd lose everything in the log, but at this point that's not a problem. However, I've never faced any question like this before. What's the best way to go about wiping a table that has roughly 167 million records in it, without overloading and crashing the server? Can I use TRUNCATE safely, or do I need to whip up a script that runs during off-hours and deletes records from the table in chunks?

Glukinho's avatar

No valid SQL statement will crash a database server, so you don't need to worry about that. It isn't a matter of table size in any case.

You can dump the table structure with SHOW CREATE TABLE tablename or with mysqldump (it has an option for dumping structure only). After that, drop the table and recreate it from the previously dumped structure.
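A minimal sketch of that approach (database and table names are placeholders for your own):

```sql
-- Dump structure only, from the shell:
--   mysqldump --no-data mydb change_logs > change_logs_schema.sql

-- Then drop the table and recreate it from the dumped DDL:
DROP TABLE change_logs;
-- In the mysql client: SOURCE change_logs_schema.sql;
```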

I'm not sure whether TRUNCATE does exactly the same thing as drop/create, but the result would be the same: a clean table with no rows.

What I wouldn't do is DELETE FROM tablename: it would be very slow, and the internal counter for the auto-increment primary key is preserved (so the next row id would be something like 167 000 001, which doesn't seem preferable).

Another issue is reclaiming the free space used by the InnoDB files; the occupied space isn't released automatically, especially if you have innodb_file_per_table = off. There are many articles about releasing occupied space; in short, you have to make a full database backup, set innodb_file_per_table = on, and restore the backup. After that, all tables are stored in their own dedicated files and occupy only the space really needed for their data.
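For what it's worth, if innodb_file_per_table is already ON (the default in recent MariaDB versions), you can reclaim one table's space without the full dump/restore by rebuilding just that table; a sketch:

```sql
-- Confirm per-table tablespaces are in use:
SHOW VARIABLES LIKE 'innodb_file_per_table';

-- Rebuild the table in place; for InnoDB this is mapped to a
-- table rebuild, which shrinks the .ibd file to the live data.
OPTIMIZE TABLE change_logs;
```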

Tray2's avatar

TRUNCATE basically just tells the server that the table is empty; it doesn't meaningfully affect server load at all. It's similar to deleting a file.

I would, however, suggest that you create a job that removes any log entries older than X days and run it every day. Then you won't end up in this situation again.

Be aware that you can't just run DELETE FROM logs WHERE created_at < NOW() - INTERVAL X DAY; on a table this size, that single statement could hold locks for hours and effectively take the database down. You need to chunk it into iterations of 5–10K records, and I recommend wrapping each chunk in a transaction, because you don't want a commit after every single record.
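Since the app is Laravel, a daily retention job along those lines might look like this sketch (the table name, column name, retention window, and chunk size are all assumptions to adapt):

```php
<?php
// Hypothetical body of a scheduled Laravel command: delete expired
// log rows in small chunks so no single statement holds locks for long.
use Illuminate\Support\Facades\DB;

$cutoff = now()->subDays(90); // retention window: an assumption

do {
    // Each DELETE ... LIMIT executes as its own short statement,
    // committing per chunk instead of holding one giant transaction.
    $deleted = DB::table('change_logs')
        ->where('created_at', '<', $cutoff)
        ->limit(5000)
        ->delete();
} while ($deleted > 0);
```

Scheduling it off-hours (e.g. via Laravel's scheduler) keeps the daily delete volume small once the backlog is gone.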

alihamzahq's avatar

If you are truly okay with losing the entire log table, I would use TRUNCATE, not a chunked delete.

For this case, chunked deletes are mainly useful when you need to keep some rows, avoid long locks, or delete gradually while the table is still being used. But if the table is just a disposable audit/log table and nothing references it, TRUNCATE TABLE change_logs; is the cleaner option.

A few things I’d still do first:

-- take a backup if there is any chance you may need the data
CREATE TABLE change_logs_backup LIKE change_logs;
-- optional, only if you want a sample or full copy before wiping
-- INSERT INTO change_logs_backup SELECT * FROM change_logs;

TRUNCATE TABLE change_logs;

Important caveats:

TRUNCATE behaves like DDL in MariaDB/MySQL. It effectively drops and recreates the table, is much faster than DELETE, resets the auto-increment counter, and cannot be rolled back once committed. MariaDB also documents that TRUNCATE TABLE causes an implicit commit.

So I would run it during a quiet/off-hours window, after confirming:

SHOW CREATE TABLE change_logs;

Check that there are no foreign keys referencing it, no active app code writing to it, and no long-running queries holding locks on it.
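One way to check for foreign keys referencing the table is to query information_schema (the schema name 'myapp' is a placeholder for your database):

```sql
-- Lists any tables whose foreign keys point at change_logs.
-- An empty result means TRUNCATE won't be blocked by FK constraints.
SELECT table_name, constraint_name
FROM information_schema.KEY_COLUMN_USAGE
WHERE referenced_table_schema = 'myapp'
  AND referenced_table_name = 'change_logs';
```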

In short: if you need to preserve some rows, delete in chunks. If you genuinely want the table empty, use TRUNCATE.

Glukinho's avatar

INSERT INTO change_logs_backup SELECT * FROM change_logs;

It would take forever. For a backup, I'd rename change_logs to change_logs_backup and create change_logs from scratch. Same result, but almost instant.
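A sketch of that rename-based swap (table names are placeholders); RENAME TABLE with multiple renames in one statement is atomic, so the app never sees a missing table:

```sql
-- Create an empty copy with the same structure, then swap names.
CREATE TABLE change_logs_new LIKE change_logs;
RENAME TABLE change_logs TO change_logs_backup,
             change_logs_new TO change_logs;
-- Old data is preserved in change_logs_backup; drop it when done.
```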

jswoolf01's avatar

Thanks for the information and advice, all. I had the impression that truncating the table would take a while, but from what you've said I was wrong about that. Truncate does sound like the way to go.
