I normally use shared webhosting for monolithic web apps. Now I have a gateway (where the user logs in and makes all requests with an access token) and microservices. When I started, I forwarded the user ID with each request so the microservice could check again whether the user is the owner — but I already do that check on my gateway. While refactoring I saw that Forge (I have never worked with Forge or cloud hosting) has network options where I can set rules. Does that mean I can have three servers (1 gateway, 2 microservices) with one hosting partner in one region, connect them, and say that only my gateway can access the microservices? That would make things easy, because I could just send the request with the $id of the data that should be changed and do all the checks beforehand on my gateway. I want to be sure before doing a lot of refactoring. That is not the only security measure I added (I know this alone is not really security, but I added it anyway). Can Forge do this — hide my microservices from the public so they are only reachable from the gateway?
OK, I read some other blog posts, and it seems I'm on the right track. The gateway should handle everything: check whether the user is logged in and whether the permissions are OK. I use policies for that. The data still lives in the microservice, including the user ID, so I have to fetch it in my gateway. To avoid loading it from the microservice on every request, I built a caching system into my repositories.
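A minimal sketch of what such repository-level caching could look like in Laravel. The class name, cache key, TTL, and internal hostname (`users.internal`) are all assumptions for illustration, not anything from this thread:

```php
<?php

use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Http;

// Hypothetical repository in the gateway that looks a user up
// in the user microservice and caches the result.
class UserRepository
{
    public function find(int $id): array
    {
        // Cache::remember returns the cached value if present,
        // otherwise runs the closure and stores its result for the TTL.
        return Cache::remember("users.{$id}", now()->addMinutes(10), function () use ($id) {
            return Http::get("http://users.internal/api/users/{$id}")
                ->throw()
                ->json();
        });
    }
}
```

The main thing to think about with this approach is invalidation: when the microservice updates a user, the gateway cache is stale until the TTL expires, so either keep the TTL short or explicitly `Cache::forget("users.{$id}")` on writes.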
Forge also seems to support the network feature the way I need it: the end user can only access the gateway, and I can wire my microservices into the private network via Forge. I will get an account once my beta is ready.
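For reference, "only the gateway can reach the microservices" ultimately comes down to firewall rules on the microservice hosts. A minimal `ufw` sketch of the idea (the gateway's private IP `10.0.0.1` and the service port `8080` are hypothetical — whatever tooling the host provides would manage something equivalent):

```shell
# On the microservice server: allow the gateway's private IP
# to reach the service port, and deny everyone else.
sudo ufw allow from 10.0.0.1 to any port 8080
sudo ufw deny 8080
sudo ufw enable
```

Rule order matters here: the specific `allow` must be added before the general `deny`, because ufw evaluates rules in the order they were added.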
@pixelairport It sounds like you need to read a little more about microservices before deciding it’s the way to go for your project. Very few projects actually benefit from microservices, and if implemented incorrectly or applied to a problem that doesn’t need them, they can cause more problems than they were intended to solve.
@martinbean I don't have everything as a microservice, but maybe you are right. Now I'm confused again about whether I should refactor everything into one single application instead. I first thought microservices would be good for scaling and would give me a faster application. I also liked the idea that every team could later use its own technology, but that was not the main goal — it must be fast. I got tips in another post, and now I use Redis in some places. Would you say it is better to have a monolith and structure it in a more "domain-driven" way (I hope that is the right wording)? I mean... am I just making everything more complex without gaining the
advantage of better scalability? I need to check before refactoring everything, because right now everything works locally. But I also don't want any disadvantages later.
Maybe an example: I have an app with users and scoring. Each of the different games on my platform is a separate microservice, so every team can work on their particular game with their own technology.
@pixelairport Well if speed is what you’re after, how is chaining multiple HTTP requests going to make anything faster? That’s going to make things slower. You’re going to have the initial HTTP request into your “main” application, then the user has to wait for that application to make another HTTP request to some “microservice”, get the response back, and send everything back to the user.
@martinbean ... after thinking everything through and reading some posts, I decided to refactor everything and move my microservices into a monolith. I talked to a friend who is a DevOps engineer; he said I should just be careful to keep the former microservices as modules in independent areas, so I can move them back out to microservices later. That sounds best to me. He also said microservices would be more expensive, because I would need more servers, and that having everything in one app has some benefits. So you are right... the pros outweigh the cons. I just have to think about the structure so the parts stay independent. Thanks for your input, @martinbean.
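One way to sketch that "independent modules" idea in Laravel is to give each former microservice its own namespace and service provider, so each module owns its routes and migrations and could be extracted again later. The `Modules\Scoring` namespace and file paths here are assumptions for illustration:

```php
<?php

namespace Modules\Scoring;

use Illuminate\Support\ServiceProvider;

// Hypothetical per-module service provider: the module registers
// everything it needs itself, so the rest of the app never has to
// reach into its internals.
class ScoringServiceProvider extends ServiceProvider
{
    public function boot(): void
    {
        // Routes and migrations live inside the module directory,
        // which keeps the boundary clean for a later extraction.
        $this->loadRoutesFrom(__DIR__.'/routes.php');
        $this->loadMigrationsFrom(__DIR__.'/migrations');
    }
}
```

If the modules only ever talk to each other through their public service classes (never through each other's Eloquent models), splitting one back out into its own service later stays a realistic option.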