
inctor:

Multi-instance Laravel challenges

Hey,

Currently we're running a multi-instance Laravel setup.

We have a few challenges connected to this.

  • Performance
  • Disk space
  • Maintainability
  • Long deployment process

The setup:

  • Web Server (PHP 7, Nginx, Redis slave, scheduled jobs)
  • DB Server (MySQL with loads of RAM and a high-IOPS disk; multi-schema, each client has their own schema/database)
  • Redis Server (Redis master for replication)
  • File Server (mounted on each server for access to shared config files and source files)

This worked fine when we had a small number of clients on our platform. However, as we've grown, we've noticed some rather dramatic performance decreases.

Right now each client has their own NGINX config based on a template, and each client has a clone of the codebase in a sub-directory with a unique .env file containing their DB credentials and other unique config details (these are also supplied as fastcgi params in NGINX for web requests).

This quickly fills the server with a lot of redundant data and takes up a lot of storage. On top of that, we also need to run scheduled commands for each client, which ends up eating a lot of RAM.

  • With a small number of clients this was not really an issue, but it's gotten more troublesome as time goes on.

Where I really want to move the system is to a setup like the following:

  • Multiple web servers (this requires the NGINX sites to be shared across all servers)
  • A separate scheduled-jobs server (to take this load off the web servers)
  • DB, Redis, and file server as mentioned before
  • A single codebase, with NGINX-based configuration, as we already do for web requests
  • Storage data split per client, based on their client code, to keep content separated

My issue with this goal is that I can't work out how I would separate each client's config details when I have to run migrations or other CLI-based operations for one specific client. Right now this is handled by the .env file, but if we move to a single codebase we can't have multiple copies of it, and I can't keep swapping it out for each client I need to operate on, i.e. running migrations for XX amount of clients who each have their own DB schema.
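One way around the single-.env limitation would be to resolve a client's DB credentials by client code from a credentials map kept outside version control. A minimal sketch, assuming a hypothetical credentials array and key names (none of this is from an existing codebase):

```php
<?php

// Hypothetical helper: look up one client's DB credentials by client code
// from a credentials map kept outside version control.
function tenantDbConfig(array $tenants, string $clientCode): array
{
    if (!isset($tenants[$clientCode])) {
        throw new InvalidArgumentException("Unknown client: {$clientCode}");
    }

    $t = $tenants[$clientCode];

    // The shape matches a Laravel database connection entry, so the result
    // could be fed to config(['database.connections.tenant' => ...]).
    return [
        'driver'   => 'mysql',
        'host'     => $t['host'],
        'database' => $t['database'],
        'username' => $t['username'],
        'password' => $t['password'],
    ];
}
```

An artisan command could then accept something like a `--tenant=acme` option, call a helper like this, and set the runtime connection before doing its work.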

I've read up a lot on the whole multi-tenant vs. multi-instance debate, and the solution for us is probably a mix of both, as we'd like to keep a single codebase but separate storage and a separate DB schema for each client.

Have any of you had similar challenges with a project, or do you have some ideas?

I'd love to hear ideas, comments or suggestions for a potential solution. I'll credit the best suggestion in our source code, if you fancy that :)

inctor:

Shameless self-bump.. Someone gotta have some ideas? :)

pmall:

Let's go step by step: what is the question, exactly?

inctor:
  • How would I go about handling migrations, or other things that are unique to each client, for multiple tenants if they're all on the same codebase? Right now this is done in NGINX with fastcgi_params to feed each client's DB schema details into the environment, but I can't pass those in when working from the CLI.
martinbean:

@inctor Do these sites differ in functionality? Or could you just create a multi-tenant version that serves multiple clients, with segregated data in your database and different themes depending on the hostname?

inctor:

@martinbean They're identical, except for the actual client data. We were forced to separate the client data into separate database schemas due to client demands about content segregation and security. Nothing I had any power over; our biggest client (50,000+ users) demanded it.

Originally it was one codebase and one DB schema that was tenant-scoped, and that made things a lot easier for us, but clients began demanding that their content be separated completely from other clients', and we agreed that separate DB schemas (databases) met their requirements.

pmall:

You can pass the --database option to php artisan migrate to specify the DB connection to use. Now I guess there should be a trick to populate the config connections array. How many websites are we talking about?
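For the record, a deployment script could build that invocation per tenant with a tiny helper. A sketch, assuming a hypothetical "client_" connection-naming scheme:

```php
<?php

// Build the artisan migrate invocation for one tenant connection.
// The "client_" connection prefix is an assumption, not an existing convention.
function migrateCommand(string $clientCode): string
{
    return sprintf('php artisan migrate --database=%s', 'client_' . $clientCode);
}
```

Each connection name passed to --database has to exist in config('database.connections') at the time the command runs.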

martinbean:

@inctor So why do you now have separate codebases if a single codebase (but different databases) was working?

inctor:

@pmall I did not know about that trick. But yes, they would need to be populated, and then each client's credentials would have to be in the config file, which is not optimal, as that file is in version control. Which we'd like to avoid.

@martinbean Because it was only partially working. For web requests and such it worked fine, but whenever we had to do something in the terminal (CLI) with Artisan, it wouldn't work properly, since we couldn't get it to separate the connections without putting their credentials in the config files.

inctor:

@pmall Right now it's about 30 clients (websites/tenants) with about 120,000 users in total.

pmall:

I'd try to find a way to populate the DB connection config list from a "master DB" containing the websites' credentials on app startup.

gregrobson:

I second @pmall - normally a multi-tenant app has two connections:

  • A master one that contains the connection details for each individual tenant
  • A connection that you set at runtime - e.g. if someone logs in as [email protected], the first thing you do is find out which tenant they belong to, then set up a second connection based on acme.com's connection details.
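That runtime-connection idea could look roughly like this: map rows fetched from the master credentials table into Laravel-style connection entries. A sketch; the row/column names and the "client_" key prefix are assumptions:

```php
<?php

// Turn rows fetched from a "master" credentials table into Laravel-style
// connection config entries, keyed so each tenant gets its own connection.
// Column names (code, db_host, ...) are invented for illustration.
function connectionsFromMaster(array $rows): array
{
    $connections = [];

    foreach ($rows as $row) {
        $connections['client_' . $row['code']] = [
            'driver'   => 'mysql',
            'host'     => $row['db_host'],
            'database' => $row['db_name'],
            'username' => $row['db_user'],
            'password' => $row['db_pass'],
        ];
    }

    return $connections;
}
```

In a service provider's boot() you would merge the result into config('database.connections'); a middleware could then switch the default connection per request based on hostname.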

Regarding schema updates: you would need some back-end process that can upgrade schemas one by one, and adapt your app deployment to match. It sounds like some of your database tables may be large, and schema updates can take a while. As an example, if you were storing dates in a varchar (rubbish example I know!) and wanted to change that to a date type you would need to:

  • Add the new date column using a migration
  • Change your code to INSERT/UPDATE both the old and new columns so records start to be synced.
  • Run a second migration (not all rows will have been touched recently): copy all the current entries in the text column to the date column. You now have two identical columns.
  • Change the code again so it only references the new date column.
  • Migrate again and drop the varchar column (once you know that nothing at all is using it).
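The dual-write step above can be sketched as a plain function that fills both columns on every write; the column names here are invented for illustration:

```php
<?php

// During the transition, every write fills both the legacy varchar column
// and the new date column so the two stay in sync.
// Column names (created_on_text, created_on_date) are invented examples.
function dualWriteRow(array $row): array
{
    // Keep the old text value and derive the new typed value from it.
    $row['created_on_date'] = date('Y-m-d', strtotime($row['created_on_text']));

    return $row;
}
```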

It sounds like you have a fairly advanced setup :) My advice would be to see if there are any parts of the app you can move individually, and tackle them bit by bit!

If tenants need different features, then using flags in the master and/or tenant database can allow you to keep one codebase - you can then choose which customers get feature A, B, C, and so on.

inctor:

Thanks for the feedback so far, @gregrobson & @pmall.

The setup is getting fairly advanced, but I'm not a systems architect, merely a humble backend developer :)

So I have to do it over time, researching various topics online before applying anything to our production environment.

I think I have a decent solution for handling the various environments with a single codebase:

  • Exclude the database config file from version control
  • Create a file outside the codebase with the credentials
  • Symlink that file in as database.php when creating new clients/platforms

That way all the credentials are kept in one place. Not best practice, but it does make things easier.

Then all I need to do is write a custom deployment script that loops over all clients in the system and runs migrations with the --database parameter for each client in the config file.
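That deployment loop could be as simple as the sketch below. The "client_" connection naming and the injected runner are assumptions made so the loop can be shown (and tested) without a real Laravel app:

```php
<?php

// Run migrations for every tenant connection in sequence, collecting the
// client codes whose migration failed. The "client_" prefix is an assumption.
function migrateAllTenants(array $clientCodes, callable $run): array
{
    $failed = [];

    foreach ($clientCodes as $code) {
        // --force skips the interactive confirmation in production.
        $command = sprintf('php artisan migrate --database=client_%s --force', $code);

        // $run executes the command and returns its exit code; it is injected
        // so in real use it could be passthru(), exec(), or Artisan::call().
        if ($run($command) !== 0) {
            $failed[] = $code;
        }
    }

    return $failed;
}
```

Logging the failed list per deploy would tell you which tenants need a re-run instead of aborting the whole rollout.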

This way I think I can keep it all in one codebase, with separate databases and separate storage directories.

Next to tackle is scheduled commands and queue workers. I'd still need a queue listener/worker daemon for each client, with the --database parameter attached, to ensure I get the correct data. This eats a lot of memory, but I think I'll just pull in a new server to handle it, so it doesn't affect the web servers.

And the same for scheduled commands.

gregrobson:

@inctor - I think we're all humble backend devs at the end of the day!

Sounds like you have a good plan of action to sort everything out.
