I created a fintech app. Right now I am using ->increment()/->decrement() for the balance. My current problem is that when a certain merchant has lots of traffic, it causes a problem: each update needs to wait because of the atomic lock. I also have a ledger of the wallet transactions, but using SUM to get the real balance will be very expensive once it has a lot of records. How do you handle this stuff?
it causes a problem it needs to wait before it updates because of the atomic lock
I don't think that there is another way than waiting ...
Stop running increment() or decrement() during the HTTP request. Dispatch a job (e.g., UpdateMerchantBalance). The API responds immediately, and your queue workers deal with the lock waiting in the background. The workers will still wait on each other for that specific row, but your web server won't hang and the user won't experience timeouts.
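A framework-free sketch of that idea (the job name UpdateMerchantBalance comes from the post above; the in-memory array here just stands in for Laravel's real queue and workers):

```php
<?php
// Sketch: instead of incrementing the balance during the HTTP request,
// the request only records the intent; a worker applies it later.
// $queue stands in for a real queue backend (Redis, database, SQS).

$balances = ['merchant_1' => 100];
$queue = [];

// During the HTTP request: O(1), no row lock taken, response is immediate.
function dispatchBalanceUpdate(array &$queue, string $merchantId, int $delta): void {
    $queue[] = ['merchant_id' => $merchantId, 'delta' => $delta];
}

// Later, a queue worker drains the jobs; only IT waits on the row lock.
function runWorker(array &$queue, array &$balances): void {
    while ($job = array_shift($queue)) {
        // In Laravel this would be the handle() method of an
        // UpdateMerchantBalance job doing the atomic increment()/decrement().
        $balances[$job['merchant_id']] += $job['delta'];
    }
}

dispatchBalanceUpdate($queue, 'merchant_1', 50);   // request returns immediately
dispatchBalanceUpdate($queue, 'merchant_1', -30);
runWorker($queue, $balances);                      // background worker catches up
```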
@eskiesirius You shouldn’t be using a single column to hold the balance. You should instead have some form of ledger table, where you write individual increments and decrements as their own rows, and then derive the balance from the sum of those increments and decrements. It’s not “expensive” if you have a proper index in place.
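A minimal sketch of that ledger shape, using an in-memory SQLite database (table and column names here are illustrative, not from the thread). The index on wallet_id is what keeps the SUM cheap: the engine scans only that wallet's entries, not the whole table.

```php
<?php
// Ledger sketch: every movement is its own row; the balance is derived.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->exec('CREATE TABLE wallet_ledger (
    id INTEGER PRIMARY KEY,
    wallet_id INTEGER NOT NULL,
    amount INTEGER NOT NULL          -- signed: credits positive, debits negative
)');
// The index that makes SUM(...) WHERE wallet_id = ? affordable.
$pdo->exec('CREATE INDEX idx_ledger_wallet ON wallet_ledger (wallet_id)');

$insert = $pdo->prepare('INSERT INTO wallet_ledger (wallet_id, amount) VALUES (?, ?)');
$insert->execute([1, 500]);   // credit
$insert->execute([1, -120]);  // debit
$insert->execute([2, 999]);   // another wallet; must not leak into wallet 1

function ledgerBalance(PDO $pdo, int $walletId): int {
    $stmt = $pdo->prepare(
        'SELECT COALESCE(SUM(amount), 0) FROM wallet_ledger WHERE wallet_id = ?'
    );
    $stmt->execute([$walletId]);
    return (int) $stmt->fetchColumn();
}
```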
Sorry for the late reply guys @vincent15000 @imrandevbd @martinbean... so what I did was, I stored it in the cache and created a job that will sync it.
I'm not sure the cache is meant for that.
Several solutions are possible, but it depends on the context:
Honestly, I think that's what I would do as well: use the cached data for views, and check the DB again for any operations.
@kai-init It can perhaps be a temporary solution while waiting for a better one?
I thought about another solution: @eskiesirius, what do you mean when you say a lot of traffic? How many operations per second do you have to handle?
And for your other problem with the sum, you can add a running-sum field to a table, and this field will always hold the sum of all transactions. But if you do so, you have to be very careful that every transaction is really reflected in this field, and guard against any malfunction.
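A sketch of that running-sum field, using an illustrative SQLite schema: the snapshot column and the ledger row are written inside the same transaction, which is what "being really sure every transaction is added to this field" amounts to in practice, because either both writes land or neither does.

```php
<?php
// Running-sum sketch: wallets.balance is a cached projection of the ledger.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE wallets (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)');
$pdo->exec('CREATE TABLE wallet_ledger (
    id INTEGER PRIMARY KEY, wallet_id INTEGER NOT NULL, amount INTEGER NOT NULL)');
$pdo->exec('INSERT INTO wallets (id, balance) VALUES (1, 0)');

function recordMovement(PDO $pdo, int $walletId, int $amount): void {
    $pdo->beginTransaction();
    try {
        $pdo->prepare('INSERT INTO wallet_ledger (wallet_id, amount) VALUES (?, ?)')
            ->execute([$walletId, $amount]);
        $pdo->prepare('UPDATE wallets SET balance = balance + ? WHERE id = ?')
            ->execute([$amount, $walletId]);
        $pdo->commit();            // both rows land, or neither does
    } catch (Throwable $e) {
        $pdo->rollBack();
        throw $e;
    }
}

recordMovement($pdo, 1, 1000);
recordMovement($pdo, 1, -250);
```

After any number of movements, the snapshot and the derived sum should always agree; if they ever diverge, something bypassed the transaction.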
You can combine the two approaches to get exactly what you want to achieve.
Wallet Table
This is the fast-read layer.
wallets
- id
- user_id
- available_balance
- pending_balance
- version
- updated_at
Purpose:
- fast balance reads
- authorization checks
- dashboard/API responses
Ledger Table
This is the accounting layer.
wallet_ledger
- id
- wallet_id
- reference
- transaction_type
- direction
- amount
- balance_before
- balance_after
- status
- created_at
Immutable
Never:
- UPDATE ledger rows
- DELETE ledger rows
Corrections must be reversal entries.
This gives:
- auditability
- reconciliation
- forensic traceability
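A sketch of what a reversal entry looks like in practice, using an illustrative in-memory SQLite version of the ledger above: a mistaken debit is never UPDATEd or DELETEd; an equal-and-opposite row is appended instead, so the full history stays intact.

```php
<?php
// Reversal sketch: corrections are new rows, never edits.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE wallet_ledger (
    id INTEGER PRIMARY KEY,
    wallet_id INTEGER NOT NULL,
    reference TEXT NOT NULL,
    direction TEXT NOT NULL,       -- "debit" | "credit"
    amount INTEGER NOT NULL
)');

$insert = $pdo->prepare(
    'INSERT INTO wallet_ledger (wallet_id, reference, direction, amount)
     VALUES (?, ?, ?, ?)'
);
$insert->execute([1, 'txn-001', 'credit', 500]);
$insert->execute([1, 'txn-002', 'debit', 200]);              // oops: wrong charge
$insert->execute([1, 'reversal-of-txn-002', 'credit', 200]); // the correction

// The balance still derives cleanly, and every step remains auditable.
$balance = (int) $pdo->query(
    "SELECT SUM(CASE direction WHEN 'credit' THEN amount ELSE -amount END)
     FROM wallet_ledger WHERE wallet_id = 1"
)->fetchColumn();
```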
Every Financial Operation Must Be ACID
Example debit flow
DB::transaction(function () use ($walletId, $amount) {
    // Lock the wallet row so concurrent debits serialize on it
    $wallet = Wallet::where('id', $walletId)
        ->lockForUpdate()
        ->firstOrFail();

    if ($wallet->available_balance < $amount) {
        throw new Exception('Insufficient funds');
    }

    $before = $wallet->available_balance;
    $after  = $before - $amount;

    // Append the immutable ledger entry first
    WalletLedger::create([
        'wallet_id'        => $wallet->id,
        'reference'        => Str::uuid(),
        'transaction_type' => 'transfer',
        'direction'        => 'debit',
        'amount'           => $amount,
        'balance_before'   => $before,
        'balance_after'    => $after,
        'status'           => 'completed',
    ]);

    // Then move the snapshot; both writes commit or roll back together
    $wallet->update([
        'available_balance' => $after,
    ]);
});
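For hot merchant rows, the `version` column in the wallets schema above suggests an alternative worth sketching: optimistic locking. Instead of holding a lock with lockForUpdate(), the UPDATE only succeeds if nobody else wrote the row in between, and the caller retries on conflict. This is a framework-free sketch under that assumption, not the post author's flow:

```php
<?php
// Optimistic-locking sketch: compare-and-swap on wallets.version.
// No lock is held while the application computes the new balance.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE wallets (
    id INTEGER PRIMARY KEY, available_balance INTEGER NOT NULL, version INTEGER NOT NULL)');
$pdo->exec('INSERT INTO wallets VALUES (1, 1000, 0)');

function debitOptimistic(PDO $pdo, int $walletId, int $amount, int $maxRetries = 3): bool {
    for ($i = 0; $i < $maxRetries; $i++) {
        $stmt = $pdo->prepare('SELECT available_balance, version FROM wallets WHERE id = ?');
        $stmt->execute([$walletId]);
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        if ($row['available_balance'] < $amount) {
            return false;                     // insufficient funds
        }
        $update = $pdo->prepare(
            'UPDATE wallets SET available_balance = ?, version = version + 1
             WHERE id = ? AND version = ?'    // compare-and-swap on version
        );
        $update->execute([$row['available_balance'] - $amount, $walletId, $row['version']]);
        if ($update->rowCount() === 1) {
            return true;                      // our write won
        }
        // Someone else updated the row first: reload and retry.
    }
    return false;
}
```

In the full design the ledger row would be written in the same transaction as the successful UPDATE, exactly as in the pessimistic version above.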
Instead of:
balance = truth
you now have
ledger = truth
balance = optimized projection
This means:
- if the cache fails → safe
- if the snapshot corrupts → rebuildable
- if Redis dies → safe
- if a worker crashes → safe
Because the ledger remains authoritative.
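"Rebuildable" can be as simple as recomputing the snapshot from the ledger, sketched here with the same illustrative SQLite tables:

```php
<?php
// Rebuild sketch: if the cached balance is ever suspect (crash, bad deploy),
// recompute it from the ledger, which remains the source of truth.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE wallets (id INTEGER PRIMARY KEY, available_balance INTEGER NOT NULL)');
$pdo->exec('CREATE TABLE wallet_ledger (
    id INTEGER PRIMARY KEY, wallet_id INTEGER NOT NULL, amount INTEGER NOT NULL)');

$pdo->exec('INSERT INTO wallets VALUES (1, -42)');  // corrupted snapshot
$ins = $pdo->prepare('INSERT INTO wallet_ledger (wallet_id, amount) VALUES (?, ?)');
$ins->execute([1, 900]);
$ins->execute([1, -150]);

function rebuildBalance(PDO $pdo, int $walletId): int {
    $stmt = $pdo->prepare(
        'SELECT COALESCE(SUM(amount), 0) FROM wallet_ledger WHERE wallet_id = ?');
    $stmt->execute([$walletId]);
    $true = (int) $stmt->fetchColumn();
    $pdo->prepare('UPDATE wallets SET available_balance = ? WHERE id = ?')
        ->execute([$true, $walletId]);
    return $true;
}
```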