The next generation of Enterprise IT is expected to be fully software defined in order to meet the scale and efficiency demands created by third platform technologies. Compute, storage and networking vendors will have to migrate to technologies like cloud, Software Defined Storage (SDS) and Software Defined Networking (SDN) to ensure that their products stay relevant in the Software Defined Infrastructure (SDI) ecosystem. This paper examines the benefits of SDI, the gaps in emerging standards, the new possibilities SDI offers for workload portability, and the business opportunities for the new SDI ecosystem.
The Emerging Need
The migration from traditional, heavy iron to general-purpose x86 servers in the nineties resulted in large, complex, heterogeneous data centers that were difficult and very expensive to manage. In the past decade, the emergence of cloud and virtualization, combined with dramatic cost reductions for data center hardware, has driven an evolution to large, homogeneous data centers which are comparatively easy and cost effective to manage.
Server virtualization technology has matured over the years and has achieved widespread adoption. While it has dramatically reduced the provisioning time for servers, network and storage provisioning is still time consuming. Moving forward, next generation enterprise IT architectures will need to be completely software defined in order to deliver the scale, stability, speed and efficiency required to support rapidly evolving enterprises. This change is driven primarily by Software Defined Infrastructure (SDI). Currently, SDI is achieved through virtualization and orchestration, but network virtualization, storage virtualization and software-based orchestration have yet to be standardized.
SDI Impact for OEMs
While SDI implementations will differ widely, there is one element they will all have in common: standardized hardware. The compute server industry has already reached this level of standardization, and the storage industry is moving in the same direction.
- Traditional OEMs, who make integrated hardware-software storage systems, will have to decouple their hardware and software to make the transition to software-defined storage, where the hardware and interfaces are standardized. They may lose some of their share of the hardware business to open standards. They will also have to modify their software to conform to open hardware standards, and deliver the interoperability and support required to be best-in-class in a multi-vendor ecosystem. As a result, the primary challenge for OEMs will be making the transition to becoming an Independent Software Vendor (ISV) while differentiating their products (now all software) in a standardized SDI environment. OEMs will also have to develop additional value-adds, such as enhanced performance and industry-specific customizations, to differentiate themselves.
- Semi-traditional OEMs, who today build a software stack that runs on standard x86 hardware, don’t touch the hardware and are, therefore, already vendor agnostic. They only have to do enough tweaking and interoperability testing to be sure that their software stack works with the wide range of x86 hardware currently available in the marketplace.
- Transformers, today’s ‘out-of-the-box’ thinkers like Google and Facebook, are driving totally open architectures and standards initiatives like ‘Open Compute,’ ‘OpenStack,’ and ‘Open Storage.’ They define open hardware standards that can be integrated with SDI software to build customizable software defined infrastructures.
New Opportunities Created by SDI
The heart of any SDI architecture is the software in the control layer that provides the orchestration, security, diagnostics and other tools required to make things work seamlessly. Here, there is a significant opportunity for new technologies and software to add value and gain market share.
- Security: The autonomic changes performed in SDI will need to be monitored and authorized to avoid security issues. As SDI enables applications to control the underlying infrastructure, new forms of security vulnerability could evolve. To guard against such vulnerabilities, software defined security considerations will have to be implemented at every layer of the stack.
- Machine Learning and Analytics: SDI will require a significant amount of automation to make real-time decisions based on application performance and workload patterns in a data center. This will go way beyond traditional log collection and analysis and it will entail processing huge volumes of data in real time using state-of-the-art analytics and Big Data techniques. Artificial Intelligence and predictive analysis will play a big role in facilitating proactive intrusion detection. In addition to intrusion detection, there will be a significant opportunity for implementing self-learning, real-time decision support systems to manage the SDI compute, storage and networking layers.
- Diagnostics and Event Correlation: Log collection and analysis is a critical component in a robust SDI environment. Every change in the SDI environment should be authenticated, logged and audited. It is also important that the data store be able to capture all the log data from SDI layers in order to stitch information together for a root cause analysis. The output of the root cause analysis can be fed into an event correlation engine to predict near-term hardware module failures.
- Integrated Management: There will be a disruptive change in infrastructure management software for SDI. Traditional discovery and monitoring techniques will not work because SDI blocks can be created on the fly and can change their parameters in real time. Hence, current scan-and-find approaches for device discovery will have very limited use. Inventory identification, topology discovery and logical views will all be dynamic. The orchestration layer will have to alert the infrastructure management layer about any underlying change in the SDI blocks for timely and effective module-related changes.
- New Standards: The ideal, fully-featured SDI end state is where the applications are able to specify their infrastructure requirements in terms of parameters like capacity/power, security, performance and availability. Once SDI applications have the ability to specify these parameters, they can be used by the orchestration engine to generate and apply the policies, and then by management applications to monitor an infrastructure’s behavior. They can also be used to troubleshoot application availability or performance issues and may be useful in mitigating security breaches. Currently, standards like OASIS-TOSCA are being established, but have very limited adoption.
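To make the end state described in the last bullet concrete, the sketch below shows how an application-declared requirements document might be translated into enforceable policies by an orchestration engine. Every name here (`generate_policies`, the parameter vocabulary, the policy shapes) is hypothetical and invented for illustration; a standards-based implementation would instead use a defined grammar such as TOSCA's.

```python
# Hypothetical sketch: an application declares its infrastructure needs as
# simple parameters, and an orchestration engine maps them to policies that
# a management layer could later monitor. Thresholds are illustrative only.

def generate_policies(requirements: dict) -> list:
    """Map application-declared requirements to orchestration policies."""
    policies = []
    if requirements.get("availability", 0.0) >= 0.999:
        # High availability implies redundant placement across fault domains.
        policies.append({"type": "placement", "rule": "anti-affinity",
                         "domains": 2})
    if requirements.get("performance_tier") == "high":
        policies.append({"type": "storage", "rule": "ssd-backed"})
    if requirements.get("encrypted", False):
        policies.append({"type": "security", "rule": "encrypt-at-rest"})
    # Capacity is passed through as a quota the management layer can track.
    policies.append({"type": "quota",
                     "capacity_gb": requirements.get("capacity_gb", 10)})
    return policies

app_spec = {"capacity_gb": 500, "availability": 0.9995,
            "performance_tier": "high", "encrypted": True}
for policy in generate_policies(app_spec):
    print(policy)
```

The same declared parameters serve double duty: the orchestration engine consumes them at provisioning time, and monitoring tools can later compare observed behavior against them, which is the troubleshooting use the bullet describes.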
Future of Workload Portability
The capability for an application to automatically and programmatically provision the infrastructure that it requires will open up a new dimension to workload portability. SDI building blocks will make it possible to migrate an application instance across data centers seamlessly.
Future applications will generate metadata specifying the infrastructure they need. The platform layer will decipher the metadata and use it to provision the required underlying infrastructure. North-bound platform layer APIs will be standardized, making it easy to migrate application components across data centers. The standardization will also enable administrators to migrate entire business applications across data centers and clouds.
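A minimal sketch of the idea above: if every data center's platform layer exposed the same north-bound interface, migrating a component would reduce to replaying its metadata against the target. The `PlatformAPI` class, its methods, and the metadata fields are all assumptions made for illustration, not any existing API.

```python
# Hypothetical sketch of a standardized north-bound platform API. Because
# the interface is identical in every data center, migration is just
# "provision at target, deprovision at source" driven by app metadata.

class PlatformAPI:
    """One instance per data center; same interface everywhere."""
    def __init__(self, name: str):
        self.name = name
        self.deployed = {}          # app_id -> metadata

    def provision(self, app_id: str, metadata: dict) -> str:
        # A real platform would allocate compute/storage/network here to
        # satisfy the metadata; this sketch only records the request.
        self.deployed[app_id] = metadata
        return f"{self.name}/{app_id}"

    def deprovision(self, app_id: str) -> None:
        self.deployed.pop(app_id, None)

def migrate(app_id: str, metadata: dict,
            source: PlatformAPI, target: PlatformAPI) -> str:
    """Move an application component by replaying its metadata."""
    handle = target.provision(app_id, metadata)   # stand up the new copy
    source.deprovision(app_id)                    # then retire the old one
    return handle

dc_east = PlatformAPI("dc-east")
dc_west = PlatformAPI("dc-west")
meta = {"vcpus": 4, "memory_gb": 16, "storage_gb": 200}
dc_east.provision("billing-svc", meta)
print(migrate("billing-svc", meta, dc_east, dc_west))  # dc-west/billing-svc
```

The key design point is that the application's metadata, not the administrator's knowledge of either data center, is the portable artifact.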
The final goal is to achieve a state of standardization where applications can be dragged and dropped across data centers – from private cloud to public cloud, or across public clouds. SDI and SDI applications’ ability to abstract and represent resource requirements will facilitate this powerful drag-and-drop functionality.
We are still in the early stages of a full-fledged SDI model. SDN and SDS are two key SDI building blocks that are yet to be standardized. SDS has to reach the point where storage is application aware, and an application’s storage requirement drives the provisioning and monitoring of the storage in a self-serviced manner, within the Service Level Objectives (SLOs) and performance objectives specified by the application. In addition, the interfaces between the data services layer and data storage layer need to be defined and implemented by the OEMs. The goal is the ability to drag and drop an entire business workload across different data centers (public or private cloud) without any administrator interaction. This can only be accomplished with a comprehensive set of standards and a tightly coupled ecosystem.
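The SLO-driven storage provisioning described above can be sketched as follows. The tier catalog, SLO fields and cost figures are invented for illustration; a real SDS controller would draw them from its own catalog and from the application's declared objectives.

```python
# Hypothetical sketch of application-aware, SLO-driven storage provisioning:
# the application states its objectives, and the SDS layer self-services the
# request by picking the cheapest tier that satisfies them.

STORAGE_TIERS = [
    {"name": "archive", "max_iops": 500,   "latency_ms": 20, "cost": 1},
    {"name": "general", "max_iops": 5000,  "latency_ms": 5,  "cost": 3},
    {"name": "premium", "max_iops": 50000, "latency_ms": 1,  "cost": 10},
]

def provision_storage(slo: dict) -> str:
    """Pick the cheapest tier that satisfies the application's SLOs."""
    for tier in sorted(STORAGE_TIERS, key=lambda t: t["cost"]):
        if (tier["max_iops"] >= slo["iops"]
                and tier["latency_ms"] <= slo["latency_ms"]):
            return tier["name"]
    raise ValueError("no tier satisfies the requested SLO")

print(provision_storage({"iops": 4000, "latency_ms": 10}))  # general
```

No administrator chooses a tier here; the selection follows mechanically from the application's stated objectives, which is exactly the self-serviced behavior the paragraph calls for.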
We have a long, yet very exciting road ahead. What part will you play in creating a fully featured SDI computing environment?