Utilising caches and mutual exclusion on AWS
Hi all, I'm currently working on a large-scale IoT project. There's a lot of data processing that requires heavy cache usage and lots of shared mutexes (locks). Because I really wanted to use Horizon to visualise the state of all the job processing, I'm locked into Redis as my cache driver, and because I'm also using critical locks, it has to be a central server. That pushed me to ElastiCache for ease of use and (hopefully) scalable performance. However, it is the most expensive thing ever made! I can't believe the price of a basic Redis cluster; it's practically double the cost of my auto-scaled EC2 groups.
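For context, the locking I mean is Laravel's atomic locks via the cache driver, roughly like this (the lock name and timeout are just illustrative):

```php
use Illuminate\Support\Facades\Cache;

// Acquire a lock for up to 10 seconds; with Redis (or any central
// store) this is atomic across every node behind the load balancer.
$lock = Cache::lock('device:'.$deviceId.':ingest', 10);

if ($lock->get()) {
    try {
        // ... process the device payload ...
    } finally {
        $lock->release();
    }
}
```

This is why the cache store can't be per-node: every worker on every EC2 instance has to see the same lock.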
Do I need to be so fixated on Horizon? I understand that it can auto-balance my queues, but how is that different from running everything on the default queue with a number of workers equal to the total I'd have configured across Horizon's queues? I can live without the Horizon front-end.
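To illustrate what I'd be giving up, with Horizon the supervisor shifts processes between queues automatically; something like this in `config/horizon.php` (queue names and numbers are made up):

```php
// config/horizon.php — Horizon moves its worker pool between these
// queues based on load when 'balance' is set to 'auto'.
'defaults' => [
    'supervisor-1' => [
        'connection'   => 'redis',
        'queue'        => ['ingest', 'aggregate', 'notify'],
        'balance'      => 'auto',
        'maxProcesses' => 10,
    ],
],
```

Without Horizon I'd just run a fixed number of `php artisan queue:work` processes listening on a comma-separated queue list, so the priority order is static rather than rebalanced.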
If I dropped Horizon I could move to SQS for the queues and perhaps DynamoDB for the mutexes; I'm using DynamoDB anyway and appreciate its power. Is this the best option for performance when it comes to mutual exclusion across a multi-node, load-balanced deployment? If not, what is?
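As I understand it, Laravel's `dynamodb` cache store supports atomic locks, so `Cache::lock()` would keep working after dropping Redis; the standard store config looks roughly like this (table name is whatever you create):

```php
// config/cache.php — DynamoDB-backed cache store; Laravel's atomic
// locks work against this driver, assuming the table exists with a
// primary key named 'key'.
'dynamodb' => [
    'driver' => 'dynamodb',
    'key'    => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION', 'us-east-1'),
    'table'  => env('DYNAMODB_CACHE_TABLE', 'cache'),
],
```

My open question is whether DynamoDB's per-request latency makes this slower than Redis for lock-heavy workloads, even though it removes the central server.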
I'd appreciate any insight and suggestions from you all; the crux of this system is job/mutex performance while remaining scalable.
Also, AL2023 (Amazon Linux 2023) doesn't ship supervisord, LOL! It's apparently redundant with systemd, so my deployment process would have to create X systemd services, which is messy. I'd prefer to use some other process manager to control the number of workers, which pushed me further towards Horizon.
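For what it's worth, I gather a single systemd template unit can stand in for supervisord's `numprocs`, so it wouldn't literally be X unit files; something like this (paths and user are illustrative):

```ini
# /etc/systemd/system/queue-worker@.service
# One template; each instance (%i) is an independent worker process.
[Unit]
Description=Queue worker %i
After=network.target

[Service]
User=www-data
Restart=always
ExecStart=/usr/bin/php /var/www/app/artisan queue:work sqs --sleep=3 --tries=3

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now queue-worker@{1..8}.service` starts eight workers, and scaling is just starting or stopping instances. Still clunkier than Horizon's config file, which is why I keep coming back to it.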
Thanks to anyone that reads!