Picture this: the Head of Operations of a large agricultural equipment manufacturer receives repair requests from more than 70% of customers after an unexpected flood. Despite the company engineering weather-tolerant products and training users to protect their equipment from adverse conditions, users were unable to minimize the impact on their equipment. Concerned about plummeting customer satisfaction levels, the Operations Head wants to introduce ‘preventive care’. The idea is to deploy sensors on the equipment to capture its current position and altitude, and whether it is in a closed shelter. This would help the company correlate weather alerts for a given location with the information from the equipment and send proactive, personalized safety notifications to the owners.
The Operations Head engages the IT Head, whose two-pizza application development team begins an experimental project. They research available sensors, data formats, and communication media that can connect to a public cloud. Unsure of all the business requirements at the outset, they build working software iteratively. They choose to develop a cloud native application on a change-friendly serverless architecture, leveraging the public cloud’s IoT suite of services for API exposure, data ingress, and analysis, and they automate deployment using infrastructure as code. After iterating for about three months, they introduce a working model as a pilot in one of the flood-prone areas. The solution works beautifully, resulting in a marked improvement in customer satisfaction levels.
Did you notice that neither the CIO nor the infrastructure team figured anywhere in this picture? That’s because the Operations Head and the experimentation team didn’t need to connect to any existing applications in the corporate datacenter, nor did they need help to build and manage infrastructure. The public cloud provider offers the serverless components through one-click provisioning, and the architecture scales up or down automatically using containers. Containers are by definition ‘immutable’ and don’t need patching, updates, or ongoing management.
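To make the scenario concrete, below is a minimal sketch of what such deployment automation might look like. It assumes AWS CDK v2 with Python purely for illustration; the article does not name a cloud provider, and the stack, table, function, and the lambda/ asset directory are all hypothetical.

# Minimal, illustrative infrastructure-as-code sketch (AWS CDK v2, Python).
# Resource names and the lambda/ directory are hypothetical assumptions.
from aws_cdk import App, Stack
from aws_cdk import aws_apigateway as apigw
from aws_cdk import aws_dynamodb as ddb
from aws_cdk import aws_lambda as _lambda
from constructs import Construct


class PreventiveCareStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Latest position/altitude/shelter reading per equipment unit.
        telemetry = ddb.Table(
            self, "EquipmentTelemetry",
            partition_key=ddb.Attribute(name="equipment_id", type=ddb.AttributeType.STRING),
            billing_mode=ddb.BillingMode.PAY_PER_REQUEST,
        )

        # Function that ingests sensor readings and correlates them with weather alerts.
        ingest = _lambda.Function(
            self, "IngestHandler",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),  # hypothetical asset directory
            environment={"TABLE_NAME": telemetry.table_name},
        )
        telemetry.grant_read_write_data(ingest)

        # Public API the field devices call to push telemetry.
        apigw.LambdaRestApi(self, "TelemetryApi", handler=ingest)


app = App()
PreventiveCareStack(app, "PreventiveCareStack")
app.synth()

A single `cdk deploy` of such a stack provisions the API, the function, and the table without any server for the team to patch or manage, which is why the infrastructure team never entered the picture.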
So, why would businesses still need IT infrastructure services when the infrastructure itself is becoming invisible? Is it simply to manage their legacy applications? What if all the legacy applications were modernized onto cloud native architectures? Would infrastructure services then become redundant? The answer is a resounding ‘no’.
The role and relevance of IT infrastructure services in serverless architectures
Let’s take a deep dive into four key reasons why IT infrastructure services will remain relevant for microservices and serverless architectures.
1. Addressing architectural complexity
The evolving application architecture benefits product managers, release managers, developers, and testers, as it helps them develop applications iteratively and release features frequently. However, the enhanced flexibility comes at the cost of increased operational complexity (see Figure 1). The oldest monolithic architectures consisted of a single piece comprising all the components required to operate an application. This was followed by client-server architecture, with the server and client applications released as separate packages, which then morphed into 3-tier and n-tier architectures depending on the complexity of the application; the data path, however, remained controlled and easy to trace. Finally, serverless architectures evolved, complicating the data path significantly with innumerable data exchanges and cross-overs.
Figure 1: Evolving architecture and growing complexity
2. Going the distance - from innovation to stability
Microservices architecture fuels innovation by enabling enterprises to iterate, fail, and recover fast. Once this cycle settles, a business process reaches maturity and moves into the ‘stability’ phase. During the innovation phase, there is more appetite for accommodating higher costs, and the scale is typically lower and easier to manage. In the stability phase, however, where the scale and operating costs are higher, tooling and troubleshooting can get complicated, and data persistence and consistency can take a severe hit.
3. Enabling business process integration
For a ‘born-in-the-cloud’ enterprise, integration is built in from the beginning. However, for enterprises running a variety of architectures, such as mainframes, legacy Unix, Windows/Linux, or monster systems of record, it takes time to refactor or rebuild applications into cloud native formats. Integrating the emerging applications with these traditional systems introduces a new API service exposure layer, which breeds a complexity very similar to that of the SOA and Enterprise Service Bus deployments of the past.
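As an illustration of that exposure layer, here is a minimal sketch assuming Python with FastAPI; the route, field names, and the legacy lookup stub are hypothetical stand-ins for whatever mainframe transaction or system of record sits behind the API.

# Illustrative API facade over a legacy system of record (FastAPI assumed).
from typing import Optional

from fastapi import FastAPI, HTTPException

app = FastAPI(title="Equipment Records API")  # thin exposure layer for cloud native consumers


def fetch_from_legacy_system(equipment_id: str) -> Optional[dict]:
    """Hypothetical adapter: in reality this might invoke a mainframe
    transaction, a stored procedure, or a file-based batch interface."""
    legacy_records = {"EQ-1001": {"model": "Harvester X", "owner": "ACME Farms"}}
    return legacy_records.get(equipment_id)


@app.get("/equipment/{equipment_id}")
def get_equipment(equipment_id: str) -> dict:
    # Consumers see a plain REST resource regardless of what sits behind it.
    record = fetch_from_legacy_system(equipment_id)
    if record is None:
        raise HTTPException(status_code=404, detail="Equipment not found")
    return record

Every such facade is one more component to version, secure, monitor, and operate, which is where the complexity comparable to SOA and ESB deployments creeps back in.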
The integration problem applies to the management and operations toolset as well. How do we integrate identity and change management, incident management, backup and restore, security systems, and so on, across different architectures of infrastructure, platforms, and applications?
4. Enabling costing and accounting
The calculation of Total Cost of Ownership (TCO) can get complex in serverless architectures because it is not the investments but the usage patterns that determine the TCO (see Figure 2). In contrast, TCO calculation for server-based applications follows a process the industry is familiar with: identify the cost elements, annualize the CAPEX, add the OPEX, and derive the annual TCO. Additionally, in serverless architectures the TCO will vary from year to year, which further complicates financial projections, modeling, and planning. A simplified comparison is sketched after Figure 2.
Figure 2: TCO calculation in server-based and serverless architectures
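The following sketch contrasts the two calculations in Python with purely hypothetical numbers; the unit prices and volumes are illustrative assumptions, not figures from this article.

# Illustrative TCO comparison; all figures are invented for the example.
# Server-based: annualize CAPEX over a depreciation period, then add annual OPEX.
capex = 300_000          # servers, storage, network (one-time purchase)
depreciation_years = 5
annual_opex = 80_000     # power, hosting, licences, admin effort

annual_tco_server_based = capex / depreciation_years + annual_opex

# Serverless: cost follows usage, so TCO must be modelled per period from consumption.
requests_per_month = 40_000_000
price_per_million_requests = 0.20        # hypothetical unit price
gb_seconds_per_month = 6_000_000
price_per_gb_second = 0.0000166667       # hypothetical unit price

monthly_serverless_cost = (
    requests_per_month / 1_000_000 * price_per_million_requests
    + gb_seconds_per_month * price_per_gb_second
)
annual_tco_serverless = monthly_serverless_cost * 12  # changes whenever usage shifts

print(f"Server-based annual TCO : ${annual_tco_server_based:,.0f}")
print(f"Serverless annual TCO   : ${annual_tco_serverless:,.0f}")

The server-based figure is fixed once the CAPEX and OPEX are known, whereas the serverless figure has to be re-estimated whenever usage patterns shift, which is what makes multi-year projections harder.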
Support the move to serverless computing with new IT infrastructure services
To meet the demands of the digital world, businesses are undertaking IT modernization by adopting cloud native computing. In the process, some traditional infrastructure services that are no longer relevant for cloud native apps will disappear, and new infrastructure services, either manual or bot-driven, will take their place. Some amount of environment-specific automation may also be required: while some tooling is readily available to support native cloud environments, much more will need to be custom-built. In a serverless world, the new infrastructure services require tight integration with financial governance to avoid spend leakage, enable revenue-linked cost management, and drive financial accountability across the organization, as the sketch below illustrates. Leveraging infrastructure services can help you deliver scalable and resilient services on serverless architecture without reinventing the wheel.
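As one small example of such financial governance, a bot-driven service might continuously verify that every deployed resource carries the tags needed to allocate its cost to a revenue line. The sketch below assumes Python with invented tag keys and resource records; a real implementation would read its inventory from the cloud provider's APIs.

# Illustrative cost-governance guardrail: flag resources missing cost-allocation tags.
REQUIRED_TAGS = {"cost-center", "business-unit", "environment"}

# Hypothetical inventory; in practice this comes from the provider's inventory or billing APIs.
resources = [
    {"id": "fn-telemetry-ingest", "tags": {"cost-center": "OPS-114", "environment": "prod"}},
    {"id": "api-preventive-care", "tags": {"cost-center": "OPS-114", "business-unit": "agri", "environment": "prod"}},
]


def untagged(resources, required=REQUIRED_TAGS):
    """Return (resource id, missing tag keys) for each non-compliant resource."""
    report = []
    for resource in resources:
        missing = sorted(required - set(resource["tags"]))
        if missing:
            report.append((resource["id"], missing))
    return report


for resource_id, missing in untagged(resources):
    print(f"{resource_id} is missing tags: {', '.join(missing)}")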
Govindaraj Rangan
Practice Director – Cloud Transformation Services, Global Infrastructure Services – Wipro Ltd.
Govindaraj Rangan has 21 years of industry experience across the breadth of the technology spectrum: Application Development to IT Operations, UX Design to IT Security Controls, Presales to Implementation, Converged Systems to Internet of Things, and Strategy to Hands-on.
With his zeal to stay at the state of the art, he is quick to develop deep hands-on expertise in emerging technologies and apply it in real customer scenarios. He is currently working on solutions to cloudify Enterprise Datacenters, expanding their boundaries into public clouds, and experimenting with IoT and robotics to build the Datacenter for the Digital Era. He has an M.B.A. from ICFAI University specializing in Finance, an M.S. in Software Systems from BITS Pilani, and a B.E. (EEE) from Madras University. Reach out to him at govindaraj.rangan@wipro.com.