Enterprises are embracing cloud native thinking as a strategic principle in their digital transformation journeys, and many are now extending their cloud-first strategy to a multi-cloud deployment model, in which workloads are managed seamlessly across multiple cloud provider platforms.
While multi-cloud adoption offers compelling business benefits, ranging from cost savings to high availability and uninterrupted business continuity, it also introduces complexities around package compatibility, deployment, and service dependencies. However, with almost every public cloud provider now offering some form of Kubernetes support, multi-cloud adoption has become a truly realistic model, provided services are designed to leverage Kubernetes as the underlying orchestration platform. In this article, we focus on this very thought process, called Kubernetes native design thinking: a design philosophy that treats Kubernetes as the de facto foundation on which solutions are deployed.
While cloud native design principles work well in a single-cloud setup, deploying those services seamlessly across multi-cloud infrastructure presents several challenges, especially for cloud-provider-agnostic solutions. To be fully cloud provider agnostic, we need to revisit the deployment aspects of cloud native services.
Key challenges in migrating services across cloud providers are:
- Application Configuration
- Platform Capabilities
- Service Mesh
- Discovery services
- Registry services
- Monitoring services
- Security Policies
- Network management
- Network overlay configurations
- Storage management
- Auto-scaling support
- Release pipelines
By following Kubernetes native design thinking, we can overcome these challenges: decoupling services from cloud provider dependencies makes migration across clouds seamless and less time-consuming.
Let us look at some key design aspects to consider while adopting a Kubernetes native design approach:
- Application Configuration – Load application configuration, such as environment parameters and application-specific variables, through Kubernetes ConfigMap objects. This externalizes configuration details from application packages, making them easy to migrate across clouds.
- Service Mesh configurations – Ensure that routing policies, sidecar capabilities, and network access policies are provider agnostic. Most cloud providers offer a managed service mesh, given its ability to abstract access control, service runtime governance, and manageability. However, each cloud provider may offer a different service mesh, which can make routing policies and network access policies incompatible.
- Discovery services – Service discovery is critical for communication with other applications. If external tools such as Consul are in use, make equivalent discovery tools available in the target cloud environment; alternatively, use etcd for service discovery so that discovery remains seamless after migration to the target cloud. Ensure that all service endpoints are managed by Kubernetes. For headless services, re-package the service deployment configurations so that they are managed by the target cloud's load balancers.
- Image Registry services – Image management is usually provided by cloud providers. If cloud provider registry services are being used, migrate images to the target cloud provider's registry as part of the image migration process. Store the image registry details in a ConfigMap object that deployment configurations can reference across cloud providers. We can also use init containers to read the initial deployment configurations at startup, after migration to the target cloud provider.
- Monitoring services – Cloud providers ship different monitoring agents (Fluentd, Elastic Beats, or other sidecar agents) that push log events to a centralized monitoring platform. Ensure that all services stream logs through sidecar agents, with ConfigMaps pointing to the target monitoring platform.
- Network Policies – Kubernetes requires a network overlay module to manage internal communication between pods, and we can choose from overlay modules such as Flannel, Calico, and Romana. The granularity of network policies on resources depends on the features of the overlay module offered by the respective cloud provider, so define policies that are not overlay-module dependent.
- Storage management – Persistence is critical for the state management of services. Design services to leverage Persistent Volumes and Storage Classes through Persistent Volume Claims, and ensure that services do not read or write state directly to local volumes. Leverage Container Storage Interface (CSI) supported volumes, and pre-provision the PVs, Storage Classes, and PVCs as part of migrating services to the target cloud provider.
- Autoscaling support – A key objective of using a container orchestrator like Kubernetes is to leverage core platform capabilities, such as scalability, resiliency, and availability, for the deployed services. So design services around the autoscaling features of Kubernetes rather than implementing scaling through external feedback mechanisms (such as a monitoring platform integrated with the cloud infrastructure).
- Deployment pipeline management – It is imperative to revisit the deployment pipelines for the target cloud infrastructure. Ensure that deployments are done through Helm charts or Kubernetes Operators to insulate the deployment pipeline from the underlying infrastructure. Lastly, externalize the target infrastructure details so that target platforms can be managed seamlessly.
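To illustrate the application configuration point above, here is a minimal sketch in Python that builds a ConfigMap manifest as a plain dict, externalizing environment parameters and the image registry host so that the Deployment spec itself carries no provider-specific values. Names such as `app-config` and `registry.example.com` are hypothetical, not values from any particular provider.

```python
import json

def config_map(name, namespace, data):
    """Build a Kubernetes ConfigMap manifest as a plain dict."""
    return {
        "apiVersion": "v1",
        "kind": "ConfigMap",
        "metadata": {"name": name, "namespace": namespace},
        "data": data,
    }

# Externalized settings, including the image registry host, so that only
# this object changes when migrating to another cloud provider.
cm = config_map("app-config", "prod", {
    "LOG_LEVEL": "info",
    "IMAGE_REGISTRY": "registry.example.com",  # swapped per target cloud
})

# The Deployment consumes the whole ConfigMap via envFrom, so its own
# spec stays identical across providers.
deployment_env = {"envFrom": [{"configMapRef": {"name": cm["metadata"]["name"]}}]}

print(json.dumps(cm, indent=2))
```

At migration time, only the ConfigMap data changes; the workload manifests that reference it remain untouched.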
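The storage guidance above can be sketched the same way: a StorageClass whose `provisioner` (the CSI driver) is the only provider-specific field, and a PVC that refers to it by name. The class name, driver string, and size below are assumptions for illustration.

```python
def storage_class(name, provisioner):
    """Build a StorageClass manifest; the provisioner is the CSI driver."""
    return {
        "apiVersion": "storage.k8s.io/v1",
        "kind": "StorageClass",
        "metadata": {"name": name},
        "provisioner": provisioner,  # the only provider-specific field
    }

def pvc(name, storage_class_name, size):
    """Build a PersistentVolumeClaim bound to a named StorageClass."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class_name,
            "resources": {"requests": {"storage": size}},
        },
    }

# Hypothetical class and driver; on another cloud only the provisioner
# string changes, while every PVC referencing the class stays the same.
sc = storage_class("standard-rwo", "ebs.csi.aws.com")
claim = pvc("orders-data", sc["metadata"]["name"], "10Gi")
```

Pods then mount the claim by name, so services never see which cloud's storage backend satisfies it.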
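For the autoscaling point, a standard HorizontalPodAutoscaler (the stock `autoscaling/v2` API) keeps scaling logic inside Kubernetes rather than in an external feedback loop. A minimal sketch; the deployment name, replica bounds, and CPU threshold are hypothetical.

```python
def hpa(name, target_deployment, min_replicas, max_replicas, cpu_percent):
    """Build a HorizontalPodAutoscaler manifest targeting a Deployment."""
    return {
        "apiVersion": "autoscaling/v2",
        "kind": "HorizontalPodAutoscaler",
        "metadata": {"name": name},
        "spec": {
            "scaleTargetRef": {
                "apiVersion": "apps/v1",
                "kind": "Deployment",
                "name": target_deployment,
            },
            "minReplicas": min_replicas,
            "maxReplicas": max_replicas,
            # Scale on average CPU utilization across the pods.
            "metrics": [{
                "type": "Resource",
                "resource": {
                    "name": "cpu",
                    "target": {"type": "Utilization", "averageUtilization": cpu_percent},
                },
            }],
        },
    }

# Scale the (hypothetical) orders-svc between 2 and 10 replicas at 70% CPU.
autoscaler = hpa("orders-hpa", "orders-svc", 2, 10, 70)
```

Because the HPA is a native Kubernetes object, it migrates with the rest of the manifests instead of being re-implemented per cloud.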
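Similarly, for overlay-agnostic network policies, sticking to the standard `networking.k8s.io/v1` NetworkPolicy API (rather than CNI-specific custom resources) keeps policies portable across overlay modules. A minimal sketch with hypothetical app labels:

```python
def network_policy(name, app_label, allowed_app):
    """Build a standard NetworkPolicy allowing ingress from one app only."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": name},
        "spec": {
            # Pods this policy applies to.
            "podSelector": {"matchLabels": {"app": app_label}},
            "policyTypes": ["Ingress"],
            # Only pods carrying the allowed label may connect.
            "ingress": [{
                "from": [{"podSelector": {"matchLabels": {"app": allowed_app}}}],
            }],
        },
    }

# Hypothetical labels: allow only "frontend" pods to reach "orders-svc".
policy = network_policy("allow-frontend", "orders-svc", "frontend")
```

Any conformant overlay module (Flannel with a policy controller, Calico, etc.) can enforce this object, so the policy survives a change of provider or CNI.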
Cloud native design makes services cloud ready; Kubernetes native design thinking enables enterprises to migrate services with minimal impact and effort while maximizing the benefits of a multi-cloud deployment model. It abstracts inherent cloud provider dependencies so that true multi-cloud adoption becomes possible and enterprises can realize business benefits such as enhanced business continuity, reduced cost of ownership, reduced vendor dependency, and improved scaling.