
rafito (Level 3)
Optimizing Deploy Time for Large Laravel/Vue Monolith - Offload Build to Separate Server?

Hello everyone,

I’m working on a large monolithic project using Laravel and Vue, and I'm running into a major issue with CPU usage during deployments. Currently, whenever I run npm run prod, the CPU on my server hits 100%, and the build process takes around 15 minutes to complete, severely affecting performance.

My current setup involves an EC2 instance on AWS, and I’m wondering if there’s a way to offload the build process to a completely separate server instead of running it on my production server. My idea is to run the build on a different server and then simply copy the built files over to production.
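For reference, the "build elsewhere, copy the artifacts over" idea can be sketched as a small shell script. Everything here is an assumption for illustration — the host names, the app path, and the use of a spare EC2 instance as the build box are all hypothetical:

```shell
#!/usr/bin/env bash
# Hypothetical off-server build-and-copy deploy.
# Assumptions: a separate build instance reachable as BUILD_HOST,
# production at PROD_HOST, app checked out at APP_DIR on both.
set -euo pipefail

BUILD_HOST="build.example.com"   # hypothetical build box
PROD_HOST="prod.example.com"     # hypothetical production server
APP_DIR="/var/www/app"           # hypothetical app path

# 1) Run the heavy build on the build box, keeping production CPU idle.
ssh "$BUILD_HOST" "cd $APP_DIR && git pull && npm ci && npm run prod"

# 2) Copy only the compiled assets across; --delete removes stale chunks.
ssh "$BUILD_HOST" \
  "rsync -az --delete $APP_DIR/public/build/ $PROD_HOST:$APP_DIR/public/build/"
```

Note that if your Vite/Mix output lands somewhere other than public/build, the rsync paths need adjusting.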

Has anyone tried this approach, or is there a recommended way to handle such scenarios? I’m open to suggestions on best practices for optimizing the deployment process for large Laravel/Vue projects.

Any help or guidance would be greatly appreciated!

Thanks in advance!

experimentor

Hey @rafito, you can use AWS CodePipeline for this. It involves setting up a CodeBuild project, which runs on an EC2 compute tier of your choice. You add a buildspec.yml file to the root of the project; it contains all your build commands, like composer install, php artisan test, npm install, npm run prod, etc.
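The build commands above can be collected in a buildspec.yml sketch like this. The runtime versions and composer flags are assumptions, so adjust them for your project:

```yaml
# buildspec.yml (sketch) — placed at the repo root for CodeBuild
version: 0.2

phases:
  install:
    runtime-versions:
      php: 8.2        # assumption: match your production PHP version
      nodejs: 18      # assumption: match your local Node version
  build:
    commands:
      - composer install --no-dev --optimize-autoloader
      - php artisan test
      - npm ci
      - npm run prod

artifacts:
  files:
    - '**/*'          # package the whole project for CodeDeploy
```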

The output of this build step can be stored in an S3 bucket as a zip file. The next step to configure is a CodeDeploy application. CodeDeploy uses a file called appspec.yml, which points to shell scripts; these scripts contain the CLI instructions for extracting the contents of the zip file to the destination directory.
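A minimal appspec.yml along those lines might look like the following. The destination path and hook script name are hypothetical:

```yaml
# appspec.yml (sketch) — for a CodeDeploy EC2/on-premises deployment
version: 0.0
os: linux

files:
  - source: /
    destination: /var/www/app   # assumption: your deploy directory

hooks:
  AfterInstall:
    - location: scripts/after_install.sh   # hypothetical script: migrations, cache warm-up
      timeout: 300
      runas: root
```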

This is my setup for a CRM project:

  • Code merged to the main branch on GitHub triggers the pipeline
  • CodeBuild pulls the code from GitHub and the .env file from a secure S3 bucket
  • CodeBuild builds the project and runs the tests
  • If all tests pass, a zip of the entire project directory is generated and stored in a separate S3 bucket
  • CodeDeploy is triggered. It launches a new EC2 instance from an AMI (generated from the currently running instance)
  • CodeDeploy extracts the code onto the new instance
  • When the new instance is ready, AWS starts directing traffic to it and stops sending new requests to the old instance
  • When all requests have drained from the old instance, it is shut down

All of this can be configured in AWS CodePipeline with two YAML files. Everything else is automated, and there is no load on or throttling of the existing server.

The AWS documentation can be a bit frustrating to navigate, but once it's set up you'll have peace of mind.

An alternative is GitHub Actions. I've never tried it, but I think it's more widely adopted than AWS CodePipeline.
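For completeness, a GitHub Actions workflow doing the same off-server build could be sketched like this. The secret name, server address, and output path are all hypothetical placeholders:

```yaml
# .github/workflows/deploy.yml (sketch) — build on a GitHub runner,
# then copy only the compiled assets to the server
name: deploy
on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18        # assumption: match your local Node version
      - run: npm ci
      - run: npm run prod
      - name: Copy built assets to production
        run: |
          echo "${{ secrets.DEPLOY_KEY }}" > key && chmod 600 key
          rsync -az -e "ssh -i key -o StrictHostKeyChecking=no" \
            public/build/ deploy@your-server:/var/www/app/public/build/
```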

