Programmable Infrastructure: Traditionally, when a business needed, say, 10 additional servers, the process involved procurement, installation, configuration, testing and rollout, and could take weeks, if not months. Today, using virtualization, 10 virtual servers can be deployed in minutes. This is possible when the components of the infrastructure - compute, network, storage and associated services such as backup, archival, DNS, application services, etc. - are programmable. Every datacenter component must be exposed through APIs that allow components to talk to each other and let administrators orchestrate changes faster. This is a shift away from the traditional manual, time-consuming process, and it forms the basis of a software-defined datacenter with programmable configuration and state monitoring.
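The shift can be sketched in a few lines. This is a minimal, illustrative model only: the Datacenter class and its methods are hypothetical stand-ins for the API facade a real software-defined datacenter would expose.

```python
# A minimal sketch of programmable infrastructure: every component is
# driven through an API rather than manual steps. The Datacenter class
# and its methods are hypothetical, for illustration only.

class Datacenter:
    """Toy API facade over compute, network and storage services."""

    def __init__(self):
        self.servers = []

    def provision_server(self, name, cpus=2, ram_gb=8):
        # In a real SDDC this would be an authenticated API call
        # (e.g. a REST POST) to the virtualization layer.
        server = {"name": name, "cpus": cpus, "ram_gb": ram_gb, "state": "running"}
        self.servers.append(server)
        return server

def provision_fleet(dc, count):
    # The weeks-long manual workflow collapses into a loop of API calls.
    return [dc.provision_server(f"web-{i:02d}") for i in range(count)]

dc = Datacenter()
fleet = provision_fleet(dc, 10)
print(len(fleet))  # 10 virtual servers, deployed programmatically
```

Because the provisioning step is just code, it can be versioned, reviewed and repeated, which is what makes orchestration across compute, network and storage practical.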
Pervasive Fabric: Future datacenters will be hybrid in nature. Such datacenters necessitate a single pane of glass for provisioning, migrating, managing and monitoring the infrastructure and applications deployed across different clouds, providing workload portability and making the network and service location immaterial to the end user.
Web-Scale Agility: Companies like Facebook, Google and Microsoft make over 200 changes per day to their applications. All enterprises will want the same web-scale agility within their datacenters. A DevOps approach of automated development, testing and release management, along with microservices-friendly infrastructure, becomes necessary to achieve this web-scale agility.
Always-On: Traditional disaster recovery methodologies will fail to meet the emerging demand for 'Always-On' applications. True Always-On architectures are prohibitively expensive, but 'pseudo Always-On' environments present fresh opportunities to minimize recovery time objective (RTO). Technologies like Microsoft Azure Site Recovery replicate data in near-real time and recover compute capacity in close to zero time. Meanwhile, microservices-based applications make it fast to port workloads across multiple datacenters and bring services back up in the shortest possible time.
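A 'pseudo Always-On' failover decision can be illustrated as follows. The site names, health flags and lag thresholds are invented for the example; this is not Azure Site Recovery's actual API, just a sketch of the underlying idea.

```python
# Toy sketch of a 'pseudo Always-On' failover decision, assuming
# near-real-time replicas with measurable replication lag (all names
# and thresholds here are illustrative).

def choose_recovery_site(sites, max_lag_seconds=5):
    """Pick the healthy replica with the lowest replication lag,
    keeping the effective RTO close to zero."""
    healthy = [s for s in sites if s["healthy"] and s["lag_s"] <= max_lag_seconds]
    if not healthy:
        raise RuntimeError("no replica within RTO target; fall back to backup restore")
    return min(healthy, key=lambda s: s["lag_s"])

sites = [
    {"name": "dc-east", "healthy": True, "lag_s": 2},
    {"name": "dc-west", "healthy": True, "lag_s": 4},
    {"name": "dc-eu", "healthy": False, "lag_s": 1},
]
print(choose_recovery_site(sites)["name"])  # dc-east
```

The point is that recovery becomes a selection problem over continuously replicated state, rather than a restore-from-backup exercise measured in hours.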
Cognitive IT: Datacenters will add intelligence to their systems, developing the capability to learn: auto-scaling to meet changing business demands, predicting failures and self-healing. Pattern recognition and machine learning technologies will enable this, made possible by the availability of event and performance data and the falling cost of data storage.
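A crude sketch of this 'learn from telemetry' idea, assuming historical utilization data is available. A real system would use proper ML models; a rolling mean and standard deviation stand in for pattern recognition here, and the 20% headroom and per-node capacity figures are invented for illustration.

```python
# Illustrative sketch of 'cognitive' behaviour: learn a baseline from
# historical utilisation, then flag anomalies and size the fleet.
# Simple statistics stand in for real machine learning models.

import math
from statistics import mean, stdev

def detect_anomaly(history, latest, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from
    the learned baseline (a crude stand-in for failure prediction)."""
    mu, sigma = mean(history), stdev(history)
    return abs(latest - mu) > threshold * sigma

def scale_decision(history, latest, capacity_per_node=100):
    # Predictive auto-scaling: size the fleet for observed demand
    # plus headroom (20% here, purely illustrative).
    projected = max(latest, mean(history)) * 1.2
    return math.ceil(projected / capacity_per_node)

cpu_history = [55, 60, 58, 62, 57, 59, 61, 60]
print(detect_anomaly(cpu_history, 95))   # True: likely failure precursor
print(scale_decision(cpu_history, 250))  # nodes needed for a demand spike
```

Cheap storage matters here precisely because the baseline gets better as more event and performance history is retained.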
Security: Security concerns have been the biggest inhibitors to expanding datacenters. Fears around compromised security have grown as compute perimeters break down; but once new virtual boundaries are mapped and enterprise-defined, defense-in-depth software policies are in place, security policies can be applied to objects irrespective of where each object physically resides, in public or private clouds.
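One way to picture location-independent policy is to bind rules to object tags rather than to physical placement. The tags, policy names and merge rule below are all assumptions made for the sketch, not any particular product's model.

```python
# Sketch of location-independent security policy: rules bind to an
# object's tags, never to the cloud it happens to live in. Policy
# names and settings are illustrative.

POLICIES = {
    "pci": {"encrypt_at_rest": True, "public_ingress": False},
    "web": {"encrypt_at_rest": False, "public_ingress": True},
}

def effective_policy(obj):
    """Resolve policy from tags; physical residence plays no part."""
    merged = {"encrypt_at_rest": False, "public_ingress": True}
    for tag in obj["tags"]:
        policy = POLICIES.get(tag, {})
        # Most restrictive setting wins when tags conflict.
        merged["encrypt_at_rest"] |= policy.get("encrypt_at_rest", False)
        merged["public_ingress"] &= policy.get("public_ingress", True)
    return merged

vm_in_public_cloud = {"name": "pay-api", "cloud": "public", "tags": ["pci", "web"]}
vm_on_premise = {"name": "pay-db", "cloud": "private", "tags": ["pci"]}
# Same tags yield the same policy, wherever the object resides.
print(effective_policy(vm_in_public_cloud))
```

Because the `cloud` field never enters the decision, moving a workload between private and public clouds cannot silently weaken its protections.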
Financial Accountability: Businesses will want to consume datacenter services on a utility-based OpEx model, with the swipe of a card. This means creating shared infrastructure with the ability to cross-charge business units based on usage. As a consequence, CIOs will become more financially accountable and will need to build Business Value Realization models. Newer KPIs will emerge, such as 'Revenue per unit of Compute' and 'On-premise vs Off-premise utilization mix'.
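The chargeback mechanics reduce to metering and arithmetic. The rate, usage numbers and revenue figure below are made up for the example; only the shape of the calculation is the point.

```python
# Illustrative chargeback sketch: metered usage per business unit on
# shared infrastructure, plus a toy 'Revenue per unit of Compute' KPI
# (rate, usage and revenue figures are invented for the example).

RATE_PER_CPU_HOUR = 0.05  # hypothetical utility rate, in dollars

def cross_charge(usage):
    """Map each business unit's metered CPU-hours to an OpEx charge."""
    return {unit: round(cpu_hours * RATE_PER_CPU_HOUR, 2)
            for unit, cpu_hours in usage.items()}

def revenue_per_compute_unit(revenue, total_cpu_hours):
    # One of the newer KPIs the text anticipates.
    return revenue / total_cpu_hours

usage = {"retail": 12_000, "logistics": 4_500}
print(cross_charge(usage))  # {'retail': 600.0, 'logistics': 225.0}
print(revenue_per_compute_unit(1_000_000, sum(usage.values())))
```

Once usage is metered this way, the same data feeds both the cross-charge to business units and the CIO's Business Value Realization reporting.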
Now is the time to rethink datacenters to meet tomorrow's needs. We need to fly, yet remain grounded. It's an interesting inflexion point and the good news is that these changes are bringing CIOs and Datacenter Architects back to the forefront to lead their organizations into the Digital Era.