
eskiesirius:

Database Replication

Hello! I created a fintech app, and right now we have fairly high traffic (16 req/s — not sure if that already counts as high). I'm thinking of using database replication by setting up the read/write config with 'strict' => true. Should I be worried about data integrity — for example, replication delays on newly added/updated records?
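For context, the read/write split I'm referring to looks roughly like this in Laravel's config (a sketch — the hosts, database name, and credentials below are placeholders, not from my actual setup):

```php
// config/database.php — sketch of a read/write split for a MySQL connection.
'mysql' => [
    'driver' => 'mysql',
    'read' => [
        'host' => ['192.168.1.2'],   // replica(s): reads are balanced across these
    ],
    'write' => [
        'host' => ['192.168.1.1'],   // primary: all writes go here
    ],
    'sticky' => true,                // reuse the write connection for reads after a write
    'database' => 'fintech',         // placeholder
    'username' => 'app',             // placeholder
    'password' => env('DB_PASSWORD'),
    'charset' => 'utf8mb4',
    'strict' => true,
],
```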

vincent15000:

You will necessarily have differences between the primary database and the replica, due to the delay/latency of replicating the data.

You have to be aware of this delay/latency; as far as I know, you can't avoid it (I don't know this kind of architecture in depth, so take that with a grain of salt).

Knowing this delay/latency, this is what I would do:

  • non-critical data: let the server query either the primary database or the replica
  • critical data: force the server to query only the primary database

With 16 req/s, I don't think you need a replica database for performance purposes. So if you really want to replicate the database, do it for availability (failover) rather than for performance.

eskiesirius:

Thank you for the insight! I'm thinking of using the replica for showing metrics and exporting reports.

imrandevbd:

16 reqs/second is actually quite low for a modern SQL database. You should be able to hit 500+ reqs/s on a single instance without breaking a sweat, provided your indexing is solid. You might be over-engineering a bit early.

That said, if you want to move forward with replication for reports and metrics, that’s exactly the right use case for it. To handle the integrity/delay concern you mentioned, Laravel has a sticky option in the database config.

When 'sticky' => true is set and you perform a write during a request cycle, Laravel will immediately switch to the write connection for any subsequent reads within that same request. This solves the "read-your-own-writes" problem, where a user saves data but doesn't see it on the next line of code because the replica hasn't synced yet.
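Roughly, here's what sticky buys you — a sketch assuming a hypothetical User Eloquent model and a read/write split already configured:

```php
// With 'sticky' => true in config/database.php, this request cycle stays
// consistent even when replication lags behind.

// Write: always routed to the primary (write) connection.
$user->update(['email' => $newEmail]);

// Read: because a write already happened in this request, Laravel routes
// this query to the primary as well, so the fresh email comes back.
$fresh = User::find($user->id)->email;

// Without sticky, this read could land on a replica that hasn't synced
// yet and return the old email.
```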

1 like
vincent15000:

I didn't know about this option either.

Have a look at the documentation ;).
