What is the best approach for using Redis and Horizon on multiple server setup?
We currently have our app installed on a single instance and about to deploy it to a load balanced infrastructure with 3 nodes.
This means 3 deployments of the code base, sharing a single database.
Should we delegate one node for queue management and scheduled tasks (one Redis DB), or allow each server to manage its own queue independently (3 Redis DBs)?
I feel like using one Redis DB would be 'cleaner'... but if that node ever went down it would prevent the app from functioning properly... so is it better to allow each server to manage its own queue?
Faced with the same problem, this issue really drove our selection of cloud provider. We built a VPC on AWS with multiple load-balanced application servers connected to a clustered ElastiCache Redis. Without knowing more about your workloads and your reliability requirements, I can't really opine further, other than to say you don't horizontally scale by giving each app server its own independent Redis. You need a single shared Redis, either run on a dedicated instance or a managed solution like ElastiCache.
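For reference, pointing every app server at the shared endpoint is purely configuration. A sketch of the `.env` fragment, assuming a standard Laravel `config/database.php` (the hostname is a placeholder, not a real endpoint):

```
# .env on each of the 3 app servers — all nodes point at the same Redis
REDIS_HOST=your-cluster.xxxxxx.use1.cache.amazonaws.com
REDIS_PORT=6379
REDIS_PASSWORD=null
```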
I wouldn't run Redis on one of your app servers. While running your own Redis instance means you can defer and control the timing of maintenance, eventually you'll have a failure or need to upgrade software. Then you'll have to consider the data in your cache: do you need to migrate it? Using a managed service like ElastiCache eliminates those issues, but doesn't free you from some maintenance downtime. We tested with a single Redis instance but found that availability can be affected by the periodic maintenance AWS performs. We have a critical path where we can't accept minutes-long interruptions, so we run in clustered mode. AWS also recently improved availability in clustered mode to allow writes to continue while maintenance is occurring. That said, during our 8-month test period, we only had one maintenance event that made Redis unavailable, for nearly 10 minutes.
You may have no other choice for hosting/hardware, but if you do consider AWS, they have a ton of reference material. Let me know if I can be more specific about anything.
Hey @lindstrom, we are attempting a similar configuration on AWS using multiple app servers to process queues that all connect to an ElastiCache (redis) endpoint. We're running horizon on each of the app servers. However, we're having a heck of a time getting it to work correctly. The horizon dashboard shows all of the machines, supervisors, queues, and jobs, but it just won't process them! They sit in the queue in "paused" mode and never process. Are you guys using horizon? Any idea what the issue could be? I've tried using a unique redis prefix for each machine, and also using the same redis prefix for each, but with the same results. What is the proper configuration for horizon to be run on multiple servers all using the same redis db?
Sorry -- just saw this. Hope you have it sorted. The only time you'll see "Paused" on the Horizon dashboard's status is if you explicitly run php artisan horizon:pause.
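For anyone else who lands here with queues stuck in "paused": a paused Horizon master can be resumed from the CLI. These are environment-dependent commands, so run them on each app server where Horizon is installed:

```
# Resume job processing on a paused Horizon master process
php artisan horizon:continue

# Check the current status (running / paused / inactive)
php artisan horizon:status
```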
We use Horizon on Forge-provisioned app servers. We have a daemon that keeps horizon running. When we deploy, we call php artisan horizon:purge && php artisan horizon:terminate. The daemon restarts horizon so it gets the freshly-deployed code.
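For anyone provisioning by hand rather than on Forge, the "daemon that keeps horizon running" is typically a Supervisor program; a minimal sketch, with illustrative paths and user names:

```
; /etc/supervisor/conf.d/horizon.conf — restarts horizon whenever it exits,
; including after `horizon:terminate` during a deploy, so the freshly
; deployed code is picked up
[program:horizon]
process_name=%(program_name)s
command=php /home/forge/example.com/artisan horizon
autostart=true
autorestart=true
user=forge
stopwaitsecs=3600
redirect_stderr=true
stdout_logfile=/home/forge/example.com/horizon.log
```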
There isn't anything special in our config. We don't use a unique prefix for each machine. Just bear in mind that the number of processes you assign will be multiplied by the number of app servers you have running. We set anything scheduled to run on one server.
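To expand on that last point: Laravel's scheduler can coordinate through the shared Redis itself via `->onOneServer()`, which takes an atomic cache lock so only the first server to grab it runs the task. A sketch of what that might look like (the command name is hypothetical; this requires a cache driver that supports locks, which Redis does):

```php
// app/Console/Kernel.php — the schedule runs on all 3 servers, but the
// onOneServer() lock ensures each task executes on only one of them
protected function schedule(Schedule $schedule)
{
    $schedule->command('reports:generate')
             ->dailyAt('02:00')
             ->onOneServer();
}
```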
Let me know if you figured it out or if I can offer anything else to help. Someone should really write a comprehensive tutorial on best practices for scaling Horizon horizontally. It's confusing for sure.