Seamless database migration under high load and parallel deploys in Laravel
Hello, community! I have a high-traffic Laravel web application that needs to maintain constant availability. We are striving for seamless deploys and have run into a challenge during database migrations.
During the deploy process, we have both new and old instances of the application running concurrently, which can cause problems if the database structure changes and the older instances can no longer work correctly with the new schema.
We have the following restrictions:
We can't take the service down during the deploy (so `php artisan down` and `php artisan up` can't be used).
Multiple instances of the application are running at the same time.
During the deploy, some instances are on the old version and some are on the new one.
Has anyone faced similar issues? Do you have ideas about how I might structure the migration process so that the old and new application instances can share the same database without errors?
Any suggestions or remarks are welcome!
@jlrdw
In our case, due to the high volume of traffic and the nature of our web application, any downtime can significantly impact our users' experience and the overall performance of our services. This is why maintaining constant availability is a key requirement for us.
@igorhmelevskoy It’s impossible to keep “old” versions of applications running if they depend on an older database schema that’s been changed to support new application versions.
Well, I understand your problem well. I have been doing continuous integration for more than 20 years. I admit I take the downtime (during the night) to avoid the problem you are having, but nevertheless it is possible to find a solution if you are willing to put enough effort into it.
1. Create the changed tables, let's say `TABLENAME_new`.
2. Make new models for all changed tables, let's say `ModelnameNew`.
3. Prepare all old models so that every change is also applied to the new table: override `save()`, `create()`, and so on, and create an instance of the new model there to sync changes into the new tables. The use of UUIDs is highly recommended, by the way.
4. Then start a copy process from the old tables to the new ones.
5. Writes through the old models will then sync all changes while still using the old tables and logic.
6. Once everything is copied to the new structure, you can delete the old models, rename the new models, and deploy this in one step, in no time.
7. To clean up the mess, run a migration that drops the old tables, renames the new ones, and sets the old table names back in the new models.
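The dual-write part of these steps might look roughly like this in Laravel. This is only a sketch under assumed names: the `Order`/`OrderNew` models, the `orders_new` table, and the `mapToNewSchema()` helper are all hypothetical, and the column mapping must be adapted to whatever actually changed in your schema.

```php
<?php

use Illuminate\Database\Eloquent\Model;

// Hypothetical new model writing to the restructured table.
class OrderNew extends Model
{
    protected $table = 'orders_new';
    protected $guarded = [];
    public $incrementing = false;   // shared UUIDs instead of auto-increment IDs
    protected $keyType = 'string';
}

// The old model keeps serving old application instances, but mirrors
// every write into the new table so the copied data stays in sync.
class Order extends Model
{
    protected $table = 'orders';
    protected $guarded = [];

    public function save(array $options = [])
    {
        $saved = parent::save($options);

        if ($saved) {
            // Upsert the same row into the new structure.
            OrderNew::updateOrCreate(
                ['id' => $this->id],       // UUID shared between both tables
                $this->mapToNewSchema()    // translate old columns to new ones
            );
        }

        return $saved;
    }

    public function delete()
    {
        OrderNew::where('id', $this->id)->delete();
        return parent::delete();
    }

    // Hypothetical mapping; adjust to the columns that actually changed.
    protected function mapToNewSchema(): array
    {
        return [
            'customer_id' => $this->customer_id,
            'total_cents' => (int) round($this->total * 100),
        ];
    }
}
```

With something like this in place, the one-off backfill copy and the live writes converge on the new table, and once the copy is complete a single deploy can switch the application over to the new models.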
Hope you get the point.
It is a lot of messy work that needs to be tested, and there are pitfalls involved depending on the complexity of your database structure. In most cases it is very questionable whether a downtime really is that bad. But if you want to walk this stony road... it is possible.