
PeregrineStudios

Fastest Way to Duplicate a ~9GB MySQL DB

I've inherited a project that's sort of a mish-mash of legacy code, multiple developers, and half-implemented pieces of Laravel (for example, they've brought in Eloquent but are only using the query builder). I'm tasked with implementing unit testing. It's actually been surprisingly straightforward so far, but now I'm coming up against the fact that there's no database seeding and inconsistently implemented migrations - the local database is a raw SQL dump of the production server. For obvious reasons, I would prefer to spin up a new database for testing.

The way I most frequently do this on my own projects is just to use database seeding with an in-memory SQLite database, but a) converting MySQL to SQLite isn't fun, and b) a 9GB SQLite database would be a bit much.

My best bet for database testing would, I believe, be to duplicate the local MySQL database and point to the duplicate when testing. However, it's still not a perfect solution. Due to the database size, importing a plain SQL dump takes ~1.5 hours, which makes resetting the database after running a full test suite a difficult proposition.

So I suppose what I'm looking for is either:

a) a foolproof way to roll back a series of database changes, or
b) a much, much (probably impossibly) faster way to duplicate a ~9GB MySQL database

I have a feeling neither of these exists, which would be unfortunate. The long-term plan is to implement database seeding, but that's a long way off.

Any database aficionados know of some black magic that would help me out here?

Cronix

I think for something like that, with that size of a db, the fastest would be to back up the actual db files on the filesystem, in /var/lib/mysql or wherever they are for you. Stop the MySQL server first so the copy is consistent, then just make a copy into a different dir or something. Then do your tests. When you need a fresh copy, just copy the files you backed up over the existing ones. For an InnoDB db, that means files like ibdata1, ib_logfile0, and ib_logfile1, plus the per-database directories holding the table files.

Anyway, that would be significantly faster than importing via mysqldump or something, since you're dealing with regular files on the filesystem instead of actually going through SQL. It would probably take just a few seconds.

Anyway, do some research on that and make absolutely sure you back stuff up before attempting. Take note of the owner/group and permissions the original files have. You could easily script this.
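Scripted, Cronix's approach could look roughly like this. This is only a sketch: the datadir path, snapshot path, and the `mysql` service name are assumptions about a typical Linux install, so adjust them to your setup.

```shell
# Snapshot/restore sketch for a MySQL datadir.
# DATADIR and SNAPSHOT are assumed defaults -- override for your install.
DATADIR="${DATADIR:-/var/lib/mysql}"
SNAPSHOT="${SNAPSHOT:-/var/lib/mysql-snapshot}"

db_snapshot() {
    # The server must be stopped so the copied files are consistent.
    sudo systemctl stop mysql
    sudo cp -a "$DATADIR" "$SNAPSHOT"   # -a preserves owner/group/perms
    sudo systemctl start mysql
}

db_restore() {
    sudo systemctl stop mysql
    # --delete removes any files created since the snapshot was taken.
    sudo rsync -a --delete "$SNAPSHOT/" "$DATADIR/"
    sudo systemctl start mysql
}
```

You'd run `db_snapshot` once before the test suite and `db_restore` after; the restore is a filesystem copy, so it should take minutes at worst for 9GB rather than hours.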

bobbybouwmann

For your tests you can use factories and database transactions, right? Only create the data you really need for each test. That way your tests are faster and less flaky, because they don't depend on whatever data already happens to be there!

For your current problem: waiting 1.5 hours before you're ready to run tests doesn't really sound like the biggest issue you'll hit on this project ;)

I would advise you to take one or two days to strip the current database down to a small data set. This might take a while and might be boring to do, but it will make it easier to write your tests and develop locally, and it gives you a starting point for writing seeders later on.
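One way to produce that small set without hand-editing rows (a sketch only; the database name, table list, and 1,000-row cap below are made-up placeholders):

```shell
# Dump the full schema plus a capped slice of data from chosen tables.
DB="app"                        # placeholder database name
TABLES="users orders products"  # placeholder table list
ROWS=1000                       # rows to keep per table

make_seed() {
    # Schema for every table, no data.
    mysqldump --no-data "$DB" > seed.sql
    for t in $TABLES; do
        # mysqldump injects --where into "SELECT * FROM t WHERE ...",
        # so "1 LIMIT n" keeps just the first n rows of each table.
        mysqldump --no-create-info --where="1 LIMIT $ROWS" "$DB" "$t" >> seed.sql
    done
}
```

The resulting seed.sql imports in seconds instead of hours, and doubles as raw material for proper seeders later.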

This might help as well: https://www.devart.com/dbforge/mysql/studio/copy-database.html

shez1983

You say you have 9GB of data, but I'm sure the actual table definitions aren't 9GB, merely the data. So you could also create a duplicate and remove the extra data (or just create seeders/factories).
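Duplicating just the structure is quick even at this size. A sketch, where `app` and `app_test` are placeholder database names:

```shell
# Clone table definitions (plus routines/triggers) into an empty test DB.
# No row data is copied, so this runs in seconds regardless of data size.
clone_schema() {
    src="$1"
    dst="$2"
    mysql -e "CREATE DATABASE IF NOT EXISTS \`$dst\`"
    mysqldump --no-data --routines --triggers "$src" | mysql "$dst"
}
# Usage: clone_schema app app_test
```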

goatshark

In regards to bringing up that data in an 'environment' that you can use... I have recently found it extremely useful to start mysql containers while binding a directory containing my dump file as /docker-entrypoint-initdb.d.

docker run \
-e MYSQL_ROOT_PASSWORD=some-password \
-e MYSQL_DATABASE=some-db-name \
-e MYSQL_USER=admin \
-e MYSQL_PASSWORD=some-password \
-v "$(pwd)/db":/var/lib/mysql \
-v "$(pwd)/db-backup":/docker-entrypoint-initdb.d \
--name some-container-name \
-d mysql:5.7

This creates a database named some-db-name and populates it with the data from the dump file I have in ./db-backup.
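The handy property here, if I'm reading the image's entrypoint behavior right, is that the init scripts only run when the bound datadir is empty. So resetting to a clean copy is just wiping ./db and recreating the container. A sketch reusing the placeholder names from the command above:

```shell
# Recreate the container with a fresh datadir so the dump in
# ./db-backup is imported again from scratch.
reset_db() {
    docker rm -f some-container-name 2>/dev/null
    rm -rf ./db && mkdir -p ./db
    docker run \
        -e MYSQL_ROOT_PASSWORD=some-password \
        -e MYSQL_DATABASE=some-db-name \
        -v "$(pwd)/db":/var/lib/mysql \
        -v "$(pwd)/db-backup":/docker-entrypoint-initdb.d \
        --name some-container-name \
        -d mysql:5.7
}
```

Note that the re-import itself still takes as long as a normal restore, so this buys you reproducibility and isolation more than speed.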

That doesn't exactly answer your question, but maybe gives you some other angles to try.

YMMV, obviously.
