Certainly! Deploying the same Laravel application across 250+ domains can lead to a lot of redundant files and high disk/resource usage. Here are several strategies to significantly reduce duplication and improve your setup:
1. Use a Single Shared Codebase with Multi-Tenancy
Instead of physically copying the codebase for each domain/instance, convert your application to support multi-tenancy. Laravel can serve multiple domains from a single deployment by detecting the tenant from the request's domain at runtime (using packages like tenancy/tenancy or stancl/tenancy), allowing you to share almost all code and resources.
Key advantages:
- Single deployment to update all instances.
- No duplication of code-related disk usage.
- Much easier maintenance (bugfixes and releases are instantly reflected for all domains).
2. Leverage Symlinks for Sharable Directories
If true multi-tenancy is not an option, another solution is to deploy each “instance” as a thin skeleton, with symlinks to shared code:
/var/www/my-project-shared/
├── app/
├── vendor/
├── bootstrap/
├── ...
/var/www/domain1/
├── .env <----- unique
├── storage/ <----- unique (user uploads, caches)
├── public/ <----- domain-specific assets
├── app -> ../my-project-shared/app (symlink)
├── vendor -> ../my-project-shared/vendor (symlink)
└── ...etc.
Steps:
- Shared folders: app, vendor, bootstrap, config, etc.
- Per-instance folders/files: .env, storage, public (if custom logos, etc.)

Create the symlinks at deploy time. Laravel will follow the symlinks without issue, as long as storage/ and .env are writable and unique per domain.
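The steps above can be sketched as a small script. The `/var/www` paths in the example call are illustrative assumptions from the layout shown; adjust them to your own setup:

```shell
#!/bin/sh
# Sketch: create one thin per-domain instance skeleton.

make_skeleton() {
  shared="$1"
  instance="$2"
  # Real, per-domain directories and files:
  mkdir -p "$instance/storage" "$instance/public"
  : > "$instance/.env"
  # Everything identical across domains becomes a symlink (-n replaces an
  # existing link instead of descending into it on re-runs):
  for dir in app vendor bootstrap config routes resources; do
    ln -sfn "$shared/$dir" "$instance/$dir"
  done
}

# Example (paths are assumptions, adjust to your layout):
#   make_skeleton /var/www/my-project-shared /var/www/domain1
```

Running it again after a shared-code update is safe: `-sfn` simply replaces the existing links.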
3. Optimize Git and Vendor Usage
- Do not deploy the .git folder: it's 87M! Use Forge's "Deploy via Git" or CI tools like Envoyer/GitHub Actions to deploy the code, but exclude the .git directory after checkout.
- Do not commit vendor/: run composer install --no-dev --optimize-autoloader during deployment to install production packages only.
- You can even share the vendor/ directory via symlink if all domains are on the exact same code and dependencies.
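One way to guarantee the .git directory never reaches the server is to export the tree with `git archive`, which writes only the committed files. A minimal sketch (the repo and target paths are illustrative):

```shell
#!/bin/sh
# Sketch: export a clean working tree -- .git metadata never ships.

export_tree() {
  repo="$1"
  target="$2"
  mkdir -p "$target"
  # `git archive` streams only the committed tree; pipe it straight
  # into the target directory.
  git -C "$repo" archive HEAD | tar -x -C "$target"
}

# After exporting, install runtime packages with an optimized autoloader:
#   composer install --no-dev --optimize-autoloader --working-dir=/var/www/my-app-shared
```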
4. Move User Uploads/Cache to Centralized or External Services
If possible, relocate user uploads (in storage/app/public) to an S3 bucket or similar. Cache and session can use Redis/memcached instead of filesystem, further reducing disk usage per instance.
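As a sketch, the relevant .env entries might look like the following. Key names follow Laravel's standard configuration, but check them against your Laravel version; the bucket and host values are placeholders:

```shell
# Illustrative .env entries -- values are placeholders.
FILESYSTEM_DISK=s3          # FILESYSTEM_DRIVER on Laravel 8 and earlier
AWS_BUCKET=my-app-uploads
CACHE_DRIVER=redis          # CACHE_STORE on Laravel 11+
SESSION_DRIVER=redis
REDIS_HOST=127.0.0.1
```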
5. Use Environment-based Config
Unique values per domain (like database credentials, app names, etc.) go in .env and optionally storage or a unique public folder. Everything else can be shared.
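For illustration, a per-domain .env might contain little more than this (all names and credentials below are hypothetical):

```shell
# Illustrative per-domain values; everything not listed here can be shared.
APP_NAME="Domain One"
APP_URL=https://domain1.example
DB_DATABASE=domain1_db
DB_USERNAME=domain1_user
DB_PASSWORD=change-me
```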
6. Sample Deployment Directory Structure
/var/www/my-app-shared/ # owned/updated by deployment CI/CD
app/
vendor/
resources/
config/
...
/var/www/instance-1/
.env
storage/
public/
# symlinks to ../my-app-shared/app etc.
/var/www/instance-2/
... repeat ...
Sample symlink commands for deployment (run from within each instance directory):
ln -s /var/www/my-app-shared/app app
ln -s /var/www/my-app-shared/vendor vendor
ln -s /var/www/my-app-shared/config config
...
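With 250+ instances you will want to loop rather than run these by hand. A sketch, assuming the shared path and instance layout shown above:

```shell
#!/bin/sh
# Sketch: refresh shared-code symlinks in every instance after a deploy.

relink_all() {
  shared="$1"
  shift
  # Remaining arguments are the instance directories.
  for instance in "$@"; do
    for dir in app vendor bootstrap config routes resources; do
      ln -sfn "$shared/$dir" "$instance/$dir"
    done
  done
}

# Example (glob is an assumption from the layout above):
#   relink_all /var/www/my-app-shared /var/www/instance-*
```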
7. Summary Table
| Folder | Shared/Symlinked | Unique/Per-Domain | Notes |
|---|---|---|---|
| app | ✅ | | |
| routes | ✅ | | |
| bootstrap | ✅ | | |
| vendor | ✅ | | |
| config | ✅ | | |
| resources | ✅ | | |
| .env | | ✅ | Unique to each domain |
| storage | | ✅ | User files, cache, logs |
| public | | ✅ | Shared or per-domain assets |
8. Automate with Deployment Scripts
Update your Forge or deployment script to:
- Prepare symlinks for shared resources.
- Deploy only essential unique files (like .env).
TL;DR Actionable Steps
- Convert to multi-tenancy (best), or symlink all shared folders between instances.
- Never deploy .git; install vendor/ with Composer at deploy time, or share it via symlink.
- Keep only .env, storage/, and, if needed, public/ per domain.
- Centralize assets/uploads if feasible.
This setup can reduce disk usage per domain from 258M down to ~1-2M.
Let me know if you want example scripts or further clarifications!