I am building a multisite CMS with Laravel and I want it to be scalable across multiple servers. The biggest problem with this is that the CMS will create site-specific folders containing PHP files. If the CMS is creating these folders, it will be difficult to replicate the changes across all servers consistently, so I am looking into using S3 as the location for those site-specific folders. I know there is a way to use a stream wrapper for S3 (http://blogs.aws.amazon.com/php/post/TxKV69TBGSONBU/Amazon-S3-PHP-Stream-Wrapper).
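For reference, registering that stream wrapper (per the linked post, SDK v2 style) looks roughly like this; the bucket and object names below are just placeholders:

```php
<?php

require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Register the SDK's s3:// stream wrapper so S3 objects can be read
// with the normal PHP file functions.
$client = S3Client::factory([
    'key'    => 'YOUR_AWS_KEY',
    'secret' => 'YOUR_AWS_SECRET',
]);
$client->registerStreamWrapper();

// 'my-cms-sites' and the object path are placeholders for this example.
$template = file_get_contents('s3://my-cms-sites/site-123/views/home.blade.php');
```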
Would this be a good way to accomplish this? Will there be performance issues with the server running PHP files off of S3? Is there a better method?
Essentially, what I am looking for is a way to offload the site-specific folders from the web stack so that no replication needs to happen when servers are booted or sites are created.
Well, it's possible to remotely fetch and execute PHP files, but I'm sure you know what we're all going to say about that :D
Network round-trips on every PHP request will slow the site down (unless you download the files once and cache them locally, rather than fetching them on every page request)
Allowing remote PHP file inclusions is a big potential security hole
I don't really know your application or server setup, but there are options such as using rsync with a cron task to update files periodically, or a cron task that downloads the PHP files from S3 and saves them to the server, rather than having PHP code pull down PHP files on each web request.
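For example, if you went the cron route, a rough sketch using Laravel's scheduler in app/Console/Kernel.php (the host and paths below are just placeholders) could look like:

```php
<?php

namespace App\Console;

use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel
{
    protected function schedule(Schedule $schedule)
    {
        // Pull the site-specific folders from a master server (or S3) every few
        // minutes instead of fetching PHP files on each web request.
        // 'deployer@master.example.com' and the paths are placeholders.
        $schedule->exec('rsync -az --delete deployer@master.example.com:/var/www/sites/ /var/www/sites/')
                 ->everyFiveMinutes();
    }
}
```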
@fideloper, we will be using Amazon EC2 and S3. The files will only be accessible to those servers, so I wasn't worried too much about the security risk (unless I'm missing something). I do realize that the round trips will cause a little lag, but Amazon does boast very low latency if you stay within their services (I have not done any testing on that yet). Nevertheless, I would implement some sort of caching. The files will mostly be things like view templates. I have also thought about things like rsync, cron jobs, and queues. I just liked the idea of using something like S3 because everything is centralized. I am just looking for the best possible solution.
@thepsion5, A simple solution for creating and editing files and directories in a multi-server environment where the number of servers is unknown. When a file is created/updated/deleted, the changes need to be replicated as real-time as possible across all servers.
OR
the files and directories can be in a centralized location that is accessible to all servers in a way that makes them executable (like a mounted drive).
Even "minimal" latency of a few ms would be absolutely awful for your server performance. You should instead consider just syncing the directories from a master server (or S3 if you really want) to each server's local storage and execute from there. Use a script to either check for updates on a regular schedule, or set up a URL to allow push notifications from the master server when there's an update to sync.
Instead of generating PHP files, you should store all the configuration on a database server and have PHP "build" what these files originally do from those settings.
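As a rough sketch (the table and column names are made up), resolving a site at runtime could look something like this instead of executing a generated PHP file:

```php
<?php

use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Route;

Route::get('/', function () {
    // Look the current site up by host name; 'sites', 'domain', 'name' and
    // 'theme' are placeholder table/column names for this example.
    $site = DB::table('sites')->where('domain', request()->getHost())->first();

    // Build the per-site configuration at runtime from the database row.
    config([
        'site.name'  => $site->name,
        'site.theme' => $site->theme,
    ]);

    // Render a theme view instead of including a generated PHP file.
    return view("themes.{$site->theme}.home", ['site' => $site]);
});
```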
If you really need to have generated files, you should look into shared local storage, something like a NAS or SAN system that can run over iSCSI. Some VPS providers can do this for a good price; if you have your own servers, a SAN can be a bit pricey, and if you can't afford one, a NAS would be your best bet.
This requires that all servers are on the same local network, though! You could do remote sync, but it becomes even more expensive with multiple server sites.
A simple solution for creating and editing files and directories in a multi-server environment where the number of servers is unknown. When a file is created/updated/deleted, the changes need to be replicated as real-time as possible across all servers.
This seems like a problem best solved with a Continuous Integration service like Travis CI. You run a build, and if the build passes you run a task on each server that puts the app into maintenance mode, pulls the latest codebase from git, dumps the autoloader, and does whatever else is necessary. Version control should be the canonical source for your codebase, not files loaded from a single remote server.
@pstephan1187
Hi there...
I was also hunting for a similar solution. Did you find one? I experimented with the following and it worked, but I guess it would not give me the required performance and scalability. What are your views?
| S3 | => mounted bucket on | EC2 | using s3fs => Apache HTTP with mod_php serves the pages
So in this case I have all the folders in the | S3 | bucket, and | EC2 | has the Apache HTTP server.
I am mainly worried about mounting S3 on every instance that would come up.
regards
What you should have is your application code running in an EC2 instance. If you need to handle heavy traffic, then you’d set up a load balancer and replication.
As a fellow AWS user, thanks for the post. These mounts will show up just like NFS mounts: I can push to the S3 bucket and my EC2 instance can access it. I've been wondering about this. Any network latency will depend on your AWS regions and the remote file system; this is the same as any virtual file system I have worked with.
I agree with @thepsion5; I would have something like TravisCI, CodeShip, or Shippable trigger the build/tests and then ship to your servers. TravisCI has a provider for CodeDeploy (see http://docs.travis-ci.com/user/deployment/codedeploy/), and it should be possible to achieve the same with other CIs.
Store your resources (images, JS, CSS, etc.) in your S3 bucket and serve them through CloudFront (CDN).
You can even serve your pages as static HTML files from S3 buckets if you wish to; there is a good cast on this here: https://laracasts.com/lessons/caching-essentials. Just change the storage location to your NFS mount (S3).
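Roughly, the idea (the disk name and paths below are just placeholders) would be something like:

```php
<?php

use Illuminate\Support\Facades\Storage;

// Render the page once and write the HTML to a shared disk (an S3 bucket or
// NFS mount configured in config/filesystems.php as 'shared' for this example),
// so it can be served statically or through CloudFront afterwards.
$html = view('pages.home')->render();

Storage::disk('shared')->put('static/example-site/home.html', $html);
```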
You still need to deploy your code to multiple servers, but at least you keep your application logic on the application server (Apache/Nginx) and have your resources/static HTML files served from the CDN.