As James Markarian stated in his recent InfoWorld article, “[when] it comes to the cloud, best-of-breed wins.” James also discussed how the quest for best-of-breed will drive multi-cloud adoption, giving the example of a Microsoft-centric enterprise that might choose Microsoft Azure for its apps while choosing Google Cloud Platform (GCP) or Amazon Web Services (AWS) for data analysis jobs. Indeed, Microsoft recently reported 98% year-over-year growth in Azure revenue, which suggests that AWS is no longer the only game in town.
The multi-cloud concept continues to come up in our conversations with forward-thinking industry professionals. But here’s a critical question the industry has yet to reach consensus on: with many companies choosing to run application stacks across public cloud environments that may be geographically dispersed, what is the right Internet-facing entry point into your application? More specifically, how does an API/microservices gateway, which is typically deployed in front of all of an application’s microservices, evolve to meet the needs of multi-cloud applications?
Let me elaborate with an example (a short client-side sketch follows the list):
- A modern application (app.company.com) has two critical components: 1) an authentication service that validates a registered user’s identity before the user can start using the service; and 2) an image upload service that receives an image and runs a deep learning model to determine whether the image contains a cat or a dog.
- The application owner wants to run the authentication service in AWS because they prefer to use RDS to store user account information, etc. They name it login.app.company.com.
- Separately, the application owner wants to run the upload service in GCP because they prefer to use ML Engine for image processing. They name it upload.app.company.com.
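To make the split concrete, here is a minimal client-side sketch of what a browser or mobile app would have to do when the two services are exposed on their own per-cloud hostnames. Only the two hostnames come from the example above; the endpoint paths, payload fields, response shapes, and the use of the Python requests library are illustrative assumptions.

```python
# Hypothetical client flow against the split deployment described above.
# Endpoint paths, payload fields, and response shapes are assumptions.
import requests

# 1) Authenticate against the AWS-hosted service.
auth = requests.post(
    "https://login.app.company.com/v1/login",
    json={"username": "alice", "password": "example-password"},
)
token = auth.json()["token"]

# 2) Send an image to the GCP-hosted service for cat/dog classification,
#    presenting the token issued by the other cloud's service.
with open("pet.jpg", "rb") as f:
    result = requests.post(
        "https://upload.app.company.com/v1/classify",
        headers={"Authorization": f"Bearer {token}"},
        files={"image": f},
    )
print(result.json())  # e.g. {"label": "cat", "confidence": 0.97}
```

Even in this toy version, the client is coupled to two hostnames and must carry a credential issued in one cloud to a service running in another; the gateway question below is about where to hide that complexity.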
In this scenario, what is the right place to deploy the API/microservices gateway?
- If a single gateway is deployed in just one of the clouds (GCP or AWS), you can simplify service naming (login.app.company.com → app.company.com/login and upload.app.company.com → app.company.com/upload), but traffic will trombone between GCP and AWS, and your end customers will suffer poor performance (see the routing sketch after this list).
- If gateways are deployed in both clouds, you’ll solve the performance issue but will have to invent a complicated solution for single sign-on (SSO) between the two services, including secure distribution of private keys across clouds (see the second sketch after this list).
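To illustrate the first option, here is a minimal sketch of path-based routing at a single gateway, written as a toy Flask reverse proxy (Flask and requests are my choice for illustration; a real deployment would use a purpose-built gateway). If this process runs in AWS, every /upload request still has to cross over to GCP and back, which is the tromboning called out above.

```python
# Toy single-cloud gateway: path-based routing for app.company.com.
# Illustrative only -- a production gateway would not be a Flask app.
from flask import Flask, Response, request
import requests

app = Flask(__name__)

# app.company.com/login/...  -> login.app.company.com  (AWS)
# app.company.com/upload/... -> upload.app.company.com (GCP)
ROUTES = {
    "login": "https://login.app.company.com",
    "upload": "https://upload.app.company.com",
}

@app.route("/<service>/<path:rest>", methods=["GET", "POST"])
def proxy(service: str, rest: str) -> Response:
    backend = ROUTES.get(service)
    if backend is None:
        return Response("unknown service", status=404)
    # Forward the request to whichever cloud hosts the backend. When this
    # gateway runs in AWS, /upload traffic trombones out to GCP and back.
    upstream = requests.request(
        method=request.method,
        url=f"{backend}/{rest}",
        headers={k: v for k, v in request.headers if k.lower() != "host"},
        data=request.get_data(),
    )
    return Response(upstream.content, status=upstream.status_code)
```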
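And to illustrate the second option, here is a minimal sketch of the kind of cross-cloud SSO coordination the second bullet refers to, assuming (as one possible design, not a prescription) that the auth service signs JSON Web Tokens with an RSA key and each gateway verifies them with the matching public key via the PyJWT library. Key file names and claims are hypothetical. Even in this simplified form, key provisioning, rotation, and revocation still have to be kept in sync across both clouds, which is exactly the operational burden described above.

```python
# Sketch of token issuance and verification split across clouds.
# Key file names, claims, and the use of PyJWT/RS256 are assumptions.
import jwt  # PyJWT

# Held only by the authentication service running in AWS.
PRIVATE_KEY = open("auth-private.pem").read()
# Copied to the gateways in both AWS and GCP; rotation must stay in sync.
PUBLIC_KEY = open("auth-public.pem").read()

def issue_token(user_id: str) -> str:
    """Called by login.app.company.com after a successful login."""
    return jwt.encode({"sub": user_id}, PRIVATE_KEY, algorithm="RS256")

def verify_token(token: str) -> dict:
    """Called by the gateway in either cloud before forwarding a request."""
    return jwt.decode(token, PUBLIC_KEY, algorithms=["RS256"])
```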
I would argue that in a multi-cloud world, the entry point (the API/microservices gateway) should reside in a neutral, cloud-agnostic entity that is not tied to a single geographic location. Given that, I would further argue that the industry needs to rethink cloud-native architectures for the multi-cloud world.
At Rafay, we have developed a strong thesis on the right way to build out multi-cloud applications. If you would like to learn more about how Team Rafay can help you scale your applications across the globe to achieve improved performance, we’d love to talk to you. You can also sign up for updates about our company by clicking here.