Picture this. The Head of Operations at a large agricultural equipment manufacturing company receives repair requests from more than 70% of its customers after an unexpected flood. Despite engineering weather-tolerant products and training users to protect their equipment from adverse conditions, users failed to minimize the impact on their equipment. Concerned about plummeting customer satisfaction levels, the Operations Head wants to introduce ‘preventive care’. The idea is to deploy sensors on the equipment to capture information about its current position and altitude, and whether it is in a closed shelter. This would help the company correlate weather alerts for a given location with the information from the equipment, sending proactive, personalized safety notifications to the owners.
The Operations Head engages the IT Head, whose two-pizza application development team begins an experimental project. They research available sensors, data formats, and communication channels that can connect to a public cloud. Unsure of all the business requirements initially, they build working software iteratively. They choose to develop a cloud native application, built on change-friendly serverless architecture, leveraging the public cloud’s IoT suite of services for exposing APIs, data ingress, and analysis. They develop the deployment automation using infrastructure as code. After iterating for about three months, they introduce a working model as a pilot project in one of the flood-prone areas. The solution works beautifully, resulting in a marked improvement in customer satisfaction levels.
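To make "infrastructure as code" concrete, here is a minimal sketch of what such deployment automation might look like. This is a hypothetical AWS SAM template (the scenario does not name a specific cloud provider); the function name, handler, and IoT topic are illustrative assumptions, not details from the pilot project.

```yaml
# Hypothetical sketch only: provisions a serverless function that is
# invoked whenever equipment sensors publish telemetry to an IoT topic.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  EquipmentAlertFunction:          # illustrative name
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler         # illustrative handler
      Runtime: python3.12
      Events:
        SensorTelemetry:
          Type: IoTRule            # trigger on incoming sensor messages
          Properties:
            Sql: "SELECT * FROM 'equipment/telemetry'"  # illustrative topic
```

Checking a template like this into version control means the whole stack can be recreated, reviewed, and rolled back like any other code, which is what lets a small team operate without a dedicated infrastructure group.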
Did you notice that the CIO and the infrastructure team did not figure anywhere in this picture? That’s because the Operations Head and the experimentation team didn’t need to connect to any of the existing applications in their corporate datacenter, nor did they need help to build and manage infrastructure. The public cloud provider offers the serverless components through one-click provisioning, and the architecture scales up and down automatically using containers. Container images are ‘immutable’ by design: rather than being patched or updated in place, running instances are simply replaced with new images, minimizing ongoing management.
So, why would businesses still need IT infrastructure services when the infrastructure itself is becoming invisible? Is it simply to manage their legacy applications? What if all the legacy architecture is modernized with cloud native architectures—would infrastructure services become redundant? The answer is a resounding ‘no’.
The role and relevance of IT infrastructure services in serverless architectures
Let’s take a deep dive into four key reasons why IT infrastructure services will continue to stay relevant for microservices and serverless architectures.
1. Addressing architectural complexity
The evolving application architecture benefits product managers, release managers, developers, and testers, as it helps them develop applications iteratively and release features frequently. However, the enhanced flexibility comes at a cost: increased operational complexity (see Figure 1). The oldest monolithic architectures consisted of a single piece comprising all the components required to operate an application. These were followed by client-server architecture, with server and client applications released as separate packages, which in turn morphed into 3-tier and n-tier architectures, depending on the complexity of the application; throughout, the data path remained controlled and predictable. Finally, microservices and serverless architectures evolved, complicating the data path significantly with innumerable data exchanges and cross-overs.