
bufferoverflow

How would you solve this scaling issue? (DB max connections)

I work on a Laravel app that handles thousands of jobs every hour. Since these jobs can take up to 6 - 10 seconds to complete, I dispatch serverless Lambda functions to perform the operation, and then once completed, the Lambda function posts the data back to my app.

The problem I'm facing is that sometimes there are bursts of POST requests coming from Lambda, I hit the database's max connections limit, and requests fail.

Apart from applying a retry strategy, what would you change in this approach to handle many concurrent posts without data loss?

I was thinking of implementing some kind of "ingest" buffer, but wouldn't adding those records use database connections as well? 😅
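For what it's worth, here's a rough sketch of what I mean by "ingest" (all names like `ResultIngestController`, `results:flush`, and `JobResult` are made up for illustration). The callback endpoint only pushes the payload into a Redis list, which opens no database connection, and a scheduled command drains the list and writes everything in one transaction:

```php
<?php

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Artisan;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Redis;

class ResultIngestController
{
    public function store(Request $request)
    {
        // RPUSH is cheap and uses the Redis connection only,
        // so the endpoint never touches the database.
        Redis::rpush('job-results', $request->getContent());

        return response()->noContent();
    }
}

// Scheduled (e.g. every minute in routes/console.php): drain the
// buffer and apply all updates inside a single DB transaction.
// Note: LPOP with a count requires Redis >= 6.2 (phpredis).
Artisan::command('results:flush', function () {
    $batch = Redis::lpop('job-results', 500) ?: [];

    DB::transaction(function () use ($batch) {
        foreach ($batch as $json) {
            $data = json_decode($json, true);
            // JobResult is a placeholder for whatever model the
            // Lambda results belong to.
            JobResult::whereKey($data['id'])->update($data['payload']);
        }
    });
});
```

This way the number of DB connections is one per flush run, no matter how many Lambdas post at once.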

Thanks!

0 likes
4 replies
Snapey

Some possibilities

  1. make sure the data from the Lambda is as close as possible to the shape you want, i.e. give the app as little processing work as possible

  2. can your jobs write the data to the database themselves?

  3. examine your queue setup. If you're using the database driver, just the bookkeeping of processing a job causes a lot of transactions
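(On point 3: with the database driver, every reserve/release of a job is itself a database transaction, whereas the redis driver keeps that bookkeeping off the database entirely. The switch is a single env setting — a sketch, assuming default Laravel config:)

```ini
# .env — queue driver selection
QUEUE_CONNECTION=redis
REDIS_HOST=127.0.0.1
```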

1 like
bufferoverflow

@Snapey

  1. Yes, data comes in a simple array shape. I pass that info to an ->update() query.
  2. AWS makes a simple POST request to the controller. The controller is executing the update query.
  3. I'm using the Redis connection, so I think I should be fine here.

Your comment gave me an idea: should I dispatch another job as soon as AWS posts the data to the controller, so the update query is performed by a job rather than at the controller level?
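Something like this, I mean (a sketch, Laravel 11 style; `ApplyLambdaResult` and `Record` are placeholder names). The point would be that concurrent DB connections are then bounded by the number of queue workers, not by the number of incoming POSTs:

```php
<?php

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Queue\Queueable;
use Illuminate\Http\Request;

class ApplyLambdaResult implements ShouldQueue
{
    use Queueable;

    public function __construct(
        public int $recordId,
        public array $data,
    ) {}

    public function handle(): void
    {
        // The update now runs on a worker: with e.g. 5 workers you
        // hold at most 5 DB connections regardless of POST volume.
        Record::whereKey($this->recordId)->update($this->data);
    }
}

// Controller: accept the callback and return immediately.
public function store(Request $request)
{
    ApplyLambdaResult::dispatch(
        $request->integer('id'),
        $request->input('payload', []),
    );

    return response()->noContent();
}
```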

Thanks!

Snapey

@bufferoverflow by queueing the update, you are not causing less work to be performed, you are causing more. The only advantage is that you can choose WHEN the work occurs and possibly move the work to a new thread which could be on a different resource.

My idea 2 was whether the job could update the database itself.

1 like
bufferoverflow

@Snapey So the initial job can write to the database, but before that it calls an external API which takes 8-10 seconds to return the data. Since I have thousands of these jobs, I delegate this work to AWS serverless, and then just post the API response data back to perform the update.
