December 2019
This article discusses:
The distributed processing needs of an Industry 4.0 environment that has begun to extend beyond traditional datacenters and the cloud, and the challenges of dynamically federating resources within it.
The world is being closely monitored, analyzed and controlled. Humans are talking to humans via mobile, email, social media and instant messengers; humans are talking to artificial things, such as their homes, vehicles, websites and industrial gear. Artificial things are talking to other artificial things – for instance, a washing machine asking a remote server for a new wash cycle. But you didn’t think that cows would be texting humans, did you? A startup in Austria placed sensors in cows’ stomachs that transmit health data via Wi-Fi, identifying when the animals are ill, in heat or pregnant. That is an example of artificial things, natural things and humans embedded within a boundaryless compute fabric that extends well beyond offices and datacenters. In many ways it exemplifies Industry 4.0, which is opening the door to new value-generation opportunities.
Industry 4.0 is the trend towards data exchange between cyber-physical systems (CPS), the Internet of Things (IoT), the Industrial Internet of Things (IIoT), cloud computing, cognitive computing and Artificial Intelligence (AI). Using IoT/IIoT, CPSs communicate and cooperate with each other and with humans in real time. Technologies such as cloud computing, cognitive computing, analytics and AI are sandwiched in between, providing the ability to sense, analyze and control these interactions and defining an enterprise’s capability in the world of Industry 4.0.
By recombining these technologies, we are inventing exciting new ways to sense the world around us. These technologies work silently and unobtrusively, 24x7, listening for events and using APIs and algorithms to generate intelligence and initiate action.
Examples of new value created via these technologies abound: capturing images from a remotely controlled drone flying over an oil and gas pipeline and letting a cloud-based computer vision service analyze them and generate automatic alerts for damage; getting Alexa to find a meeting room in the office; connecting your screens to a Netflix subscription and the subscription to your bank account; or leveraging real-time data to control illness and improve breeding in livestock.
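To make the first of these concrete, here is a minimal sketch of how a drone-inspection pipeline might hand captured images to a cloud vision service and raise alerts. The endpoint URL, the damage_score field and the alerting hook are hypothetical stand-ins, not a real API.

```python
# Hypothetical sketch of the drone-inspection flow: upload a captured image
# to a cloud vision service and raise an alert if pipeline damage is detected.
# The endpoint URL and response fields are illustrative, not a real API.
import requests

VISION_ENDPOINT = "https://vision.example.com/v1/analyze"  # hypothetical service
DAMAGE_THRESHOLD = 0.8

def inspect_frame(image_path: str) -> None:
    with open(image_path, "rb") as f:
        resp = requests.post(VISION_ENDPOINT, files={"image": f}, timeout=30)
    resp.raise_for_status()
    result = resp.json()
    # Assume the service returns a damage confidence score between 0 and 1.
    score = result.get("damage_score", 0.0)
    if score >= DAMAGE_THRESHOLD:
        send_alert(f"Possible pipeline damage (score={score:.2f}) in {image_path}")

def send_alert(message: str) -> None:
    # Stand-in for an alerting integration (email, SMS, ticketing, etc.).
    print("ALERT:", message)

if __name__ == "__main__":
    inspect_frame("frame_0001.jpg")
```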
Towards the ‘Digital First’ model
These are delightful examples of what can be made possible in a digitized world. With the advent of ultra-reliable, low-latency 5G networks in 2020, digital investments will begin to harvest benefits that are several orders of magnitude larger. However, to realize these new capabilities, infrastructure services must take their automation to the next level. Extremely large volumes of data must be processed in real time. Intelligence must be pervasive, extending from edge devices to networks and applications.
In addition to pervasive intelligence, decision management will acquire a new flavor. Humans will be required to make continuous strategic interventions: tweaking models, advancing algorithms, changing the product mix and so on. Platform requirements, compliance and network characteristics will set new demands on information processing, spanning the consumption of public cloud services, processing in private, secured datacenters, and processing at locations close to users.
Organizations must master these elements. Once they do, they will be able to pull off feats such as influencing an individual to buy add-on accessories and services, or delivering fully customized products. Others could stop a pest attack on farmland or increase the success of livestock breeding. The list of such outcomes is virtually endless.
Hybrid computing fabric for the digital era
In terms of technology, this means that the future landscape will essentially comprise a distributed computing fabric that stitches an enterprise’s datacenters, edge locations and the public cloud into a single, logically manageable entity.
This compute fabric can then meet the distributed processing needs of an Industry 4.0 environment. It will be closely aligned to the value stream, addressing the demands of customers, partners, stakeholders and regulators, with proactive data and signals maintaining the desired state of the system (see Figure 1).
Figure 1: Hybrid computing fabric for the digital era
Underlying the distributed environment will be the challenge of maintaining a variety of IT and OT KPIs (such as performance, capacity, SLAs, alerts and compliance). This will not be easy, given the scale and mission-critical nature of operations and systems amidst the rise of new threat vectors. That is why humans must continue to play a significant role, with automation and analytics used to support and supplement human decision-making.
The key is to strengthen hardware and platforms with advanced instrumentation, telemetry and standards from which signals can be derived for processing. This can be further complemented by sensors and tools, the application of AI and the use of data science techniques for closed-loop operations such as self-healing.
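As a flavor of what closed-loop, self-healing operations look like in practice, the sketch below shows the basic control loop: sample a telemetry signal, compare it against a statistically derived baseline rather than a fixed threshold, and trigger remediation. The get_latency_ms and restart_service hooks are hypothetical integration points, not part of any specific product.

```python
# Minimal sketch of a closed-loop, self-healing control loop: read a telemetry
# signal, flag anomalies against a rolling statistical baseline, and remediate.
# get_latency_ms() and restart_service() are hypothetical integration points.
import statistics
import time

WINDOW = 60          # samples kept for the rolling baseline
SIGMA_LIMIT = 3.0    # how many standard deviations count as anomalous

def control_loop(get_latency_ms, restart_service, poll_seconds=10):
    history = []
    while True:
        sample = get_latency_ms()
        if len(history) >= WINDOW:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history) or 1.0
            if abs(sample - mean) > SIGMA_LIMIT * stdev:
                restart_service()          # remediation step closes the loop
                history.clear()            # re-baseline after remediation
                time.sleep(poll_seconds)
                continue
        history.append(sample)
        history = history[-WINDOW:]
        time.sleep(poll_seconds)
```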
Mandatory ingredients for the boundaryless compute fabric
What are the mandatory elements of a boundaryless compute fabric that matures beyond cloud-centricity? How can this new fabric dynamically federate resources with fuzzy ownership, avoid latencies and coordinate fast-moving output from a number of analytical nodes without compromising security? Here are some pointers (see Figure 2):
Service brokerage and federation: The dynamic mix of multi-cloud and other third-party services from which modern, global-scale systems are built requires tools and an operating model that provide advanced brokerage functionality, drastically cutting the business’s decision and fulfillment times.
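At its core, such brokerage reduces to scoring candidate offerings against business constraints fast enough to be automated. A minimal sketch follows, with an invented provider catalog and constraint set:

```python
# Illustrative service-brokerage decision: rank third-party/cloud offerings
# against business constraints. The catalog and constraints are invented.
from dataclasses import dataclass

@dataclass
class Offering:
    provider: str
    cost_per_hour: float   # USD
    latency_ms: float      # to the target user population
    compliant: bool        # e.g., meets data-residency requirements

def broker(offerings, max_latency_ms=50.0):
    eligible = [o for o in offerings
                if o.compliant and o.latency_ms <= max_latency_ms]
    # Cheapest eligible offering wins; ties broken by latency.
    return min(eligible, key=lambda o: (o.cost_per_hour, o.latency_ms), default=None)

catalog = [
    Offering("cloud-a", 0.42, 35.0, True),
    Offering("cloud-b", 0.31, 80.0, True),   # too slow for this workload
    Offering("edge-x", 0.55, 8.0, False),    # fails compliance
]
print(broker(catalog))  # -> Offering(provider='cloud-a', ...)
```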
Leveraging public networks: Traditional networks like Multiprotocol Label Switching (MPLS) and point-to-point links are a handicap in the face of the new traffic patterns generated by datacenters, clouds, edges and end users. Technologies like SD-WAN that use the Internet as the communication medium solve many of these challenges at attractive price points. And with 5G around the corner, organizations will have another compelling alternative to traditional networks.
Utility Model: Seasonal aspects and cyclical spikes in business make it difficult for products and platforms to stay sharply aligned to demand and customer consumption patterns, and the capex pressure this implies is not sustainable. Businesses therefore need to maximize their use of the public cloud and other IT building blocks that can deliver elastic operational and technology stacks in a utility model.
Defense in Depth (DiD): Mission-critical infrastructure is highly likely to become a target for attacks, all the more so as widely used protocols like HTTP and UDP make it difficult to identify and respond to every threat vector. DiD becomes indispensable to ward off these threats, and investment in it should therefore always be in the consideration mix.
Containers and microservices: No business can operate without being part of the API ecosystem; it must consume and/or serve APIs. Existing applications are being reengineered, and new applications built from the ground up, to leverage container- and microservices-based deployment. This approach is highly sought after because of the agility and modularity it brings to application lifecycle management, and it is becoming the favored approach to modern API development. Container and microservices management platforms such as Cloud Foundry, OpenShift and Kubernetes now find an essential place among the architecture building blocks because of these lifecycle needs.
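To ground the idea, here is a minimal, self-contained microservice of the kind that typically gets packaged into a container image: one small API endpoint plus the /health probe an orchestrator uses to manage its lifecycle. This is a generic standard-library sketch, not a prescribed stack.

```python
# Minimal microservice sketch using only the standard library: one business
# endpoint and a /health probe for the orchestrator's liveness checks.
# In practice this would be packaged into a container image and deployed
# via a platform such as Kubernetes or OpenShift.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Service(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            self._reply(200, {"status": "ok"})
        elif self.path == "/v1/greeting":
            self._reply(200, {"message": "hello from a microservice"})
        else:
            self._reply(404, {"error": "not found"})

    def _reply(self, code, body):
        payload = json.dumps(body).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Service).serve_forever()
```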
Boundaryless Cloud Integration: Innovation powered by Kubernetes has created new application design and deployment patterns that allow services to be redistributed seamlessly across datacenters, clouds and edges without any modification to the applications. Transforming the network and security layers is crucial to realizing the full potential of this innovation.
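One concrete mechanism behind this redistribution is Kubernetes’ label-based scheduling: the same container image can be steered to datacenter, cloud or edge nodes purely through deployment metadata, with no change to the application. A sketch using the official Python client follows; the image name, namespace and the tier=edge node label are illustrative assumptions.

```python
# Sketch: steer an unmodified workload to edge nodes using Kubernetes
# label-based scheduling via the official Python client ("kubernetes" package).
# The image name and the "tier=edge" node label are illustrative.
from kubernetes import client, config

def deploy_to_edge():
    config.load_kube_config()  # or load_incluster_config() inside a pod
    container = client.V1Container(
        name="inference", image="registry.example.com/inference:1.0")
    pod_spec = client.V1PodSpec(
        containers=[container],
        node_selector={"tier": "edge"},  # swap to {"tier": "cloud"} to relocate
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "inference"}),
        spec=pod_spec,
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="inference"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "inference"}),
            template=template,
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment)
```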
Platform-optimized Datacenters (PoDs): Big data and data lakes are a reality. Technologies like Hadoop, Spark and Storm have made big inroads in streaming analytics, delivering unprecedented value, but that value must be balanced against the cost of handling the explosion in data. That is why organizations try on-premises, then cloud, and then go back (perhaps to another cloud). Organizations like Facebook and Google have solved the engineering challenge by building underlying compute that is optimized for the platform it serves, an approach known as PoDs. Similar PoD-based engineering is central to balancing price versus value, and Kubernetes is emerging as one of the top solutions.
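As a minimal illustration of the streaming-analytics workloads such PoDs are engineered for, the following Spark Structured Streaming sketch counts events per device from a socket feed. The source, host/port and single-column schema are placeholders for a real ingestion pipeline such as Kafka.

```python
# Minimal Spark Structured Streaming sketch: count events per device from a
# text socket source. A production pipeline would read from Kafka or similar;
# the host/port and the one-line-per-device input format are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("device-event-counts").getOrCreate()

# Each incoming line is treated as a device identifier.
events = (spark.readStream
          .format("socket")
          .option("host", "localhost")
          .option("port", 9999)
          .load())

counts = events.groupBy("value").count()

query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```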
Intent-based networks: The evolution of network technologies has seen the realization of the intent-based network, which injects business context into network characteristics. This simplifies operations and reduces the need to tweak networks.
Intent-aware and AI operations: We will, at some stage, realize that current tools and operational processes are unable to meet emerging requirements. DevOps, DevSecOps and DataOps are severely handicapped without the right tools. However, it is possible to revamp tooling using AI, allowing organizations to let go of many statically configured rules, thresholds, actions and assignments. AIOps is here to stay.
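One way AI-driven operations retire statically configured thresholds is by letting each metric learn its own baseline. The sketch below uses an exponentially weighted moving average (EWMA) of mean and variance so the alert band adapts to the metric’s behavior; it is an illustrative technique, not any specific AIOps product.

```python
# Sketch of the AIOps idea of retiring static thresholds: an exponentially
# weighted moving average (EWMA) tracks each metric's own behavior, so the
# alert boundary adapts instead of being hand-configured per metric.
class AdaptiveThreshold:
    def __init__(self, alpha=0.1, band=3.0, warmup=5):
        self.alpha = alpha      # smoothing factor for the moving estimates
        self.band = band        # tolerated standard deviations before alerting
        self.warmup = warmup    # samples to observe before alerting at all
        self.mean = None
        self.var = 0.0
        self.n = 0

    def observe(self, x: float) -> bool:
        """Feed one sample; return True if it should raise an alert."""
        self.n += 1
        if self.mean is None:
            self.mean = x
            return False
        deviation = x - self.mean
        alert = (self.n > self.warmup and
                 abs(deviation) > self.band * max(self.var ** 0.5, 1e-9))
        # Update estimates after the check, so an anomaly does not
        # immediately widen its own alert band.
        self.mean += self.alpha * deviation
        self.var = (1 - self.alpha) * (self.var + self.alpha * deviation ** 2)
        return alert

cpu = AdaptiveThreshold()
for sample in [40, 42, 41, 43, 42, 41, 95]:   # synthetic CPU% samples
    if cpu.observe(sample):
        print(f"anomaly: {sample}")            # flags only the 95% spike
```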
Sustainability: Finally, sustainability cannot be set aside. To make business sense, the boundaryless compute fabric must be sustainable and frugal with the resources it consumes across its value chain.
Every organization will be touched and transformed by the forces of Industry 4.0. It is natural for organizations to advance to a point where humans, natural things and artificial things communicate and work in perfect harmony to deliver the exciting new products, services and efficiencies of tomorrow. To reach that point, Industry 4.0 strategy and the idea of a boundaryless compute fabric must begin evolving today.
Saji Thoppil
Chief Technologist - Cloud and Infrastructure
Saji Thoppil is the Chief Technologist for Cloud and Infrastructure at Wipro. He drives the Edge Computing charter under Wipro's 5G initiative, with use cases and lifecycle management as his focus areas. Saji serves on the governing boards of LF Edge and OASIS TOSCA.
He has 25+ years of IT industry experience encompassing the design, build and operationalization of complex, distributed IT systems. In recognition of his contribution to the organization and the industry, he was conferred the title of Wipro Fellow - Distinguished Member of Technical Staff. During his career, he has created multiple practices and incubated several new IPs for Wipro. Wipro's Fluid State Data Center, designed and developed by Saji, was one of the industry's first blueprints for converged infrastructure.