The 12 strategic technology trends for the coming year per Gartner shouldn’t surprise anyone; they focus on automation, security and AI. The technicalities go deeper, of course: hyperautomation and autonomic systems, increased security through mesh and privacy-enhanced computation, generative AI, decision intelligence and AI engineering. But the larger issues are generally familiar. One trend is particularly familiar – the use of data fabric – because of its recurrence on Gartner’s list. Data fabric’s return is both notable and important; it underscores the need for a unified view across the democratized and ever-growing landscape of technology offerings in data storage and processing.
The technology-trend predictions reflect companies’ demand for process optimization and better use of available technology. As Jayant Prabhu notes, 80% of business leaders want to move to an intelligent enterprise, which, as he observes, is about more than merely technology. It’s the people who are the differentiator – and the ability to use a company’s data assets is key. Building an intelligent enterprise therefore means “…investing in a journey that pays as much attention to cultural and process changes as it does to deploying new tools.”
Advances in AI engineering, hyperautomation, and total experience are just a few of the capabilities available, with the data fabric acting as the layer of accessibility. With the wealth of data technologies available for storage and processing, upskilling the workforce across each technology to analyze its output means a large upfront cost, not to mention time. These costs mean the organization will struggle to get executive buy-in for key initiatives. But by creating a single layer of accessibility through data fabric, companies can accelerate ROI – and therefore buy-in – for cloud initiatives and other strategic ventures.
So, what is the data fabric?
The data fabric is the overarching data-management solution across data uses, from centralization of data assets to a single point of access for analytics and AI. The fabric is more than just a data layer; it is an insights portal with numerous uses. By enhancing the data fabric to become a “shopping cart” of data and analytics assets, enterprises step into the world of the data marketplace, one in which data objects can be bought and sold figuratively or literally.
The concept of a data fabric was coined by NetApp in 2016, which defined it around five core principles: control, choice, integration, access and consistency.
- Control and govern your data, agnostic of location, whether in the cloud or on-premises.
- Enable the choice of cloud, on-premises, applications, storage systems, delivery or consumption methods, with the option to change should an organization need to.
- Integrate each component surfaced through the fabric, while preserving the value of each individual component.
- Create that centralized layer of access for those individual components, for users, as and when needed.
- And create a centralized management capability to facilitate consistency across data assets.
In short, data fabric is a single layer that enables data users to collaborate and share information and value across any number of platforms, cloud or on-premises. Considering its clear value, why is it still a trend going into 2022? The industry has been talking about this for some time now; why hasn’t it become table stakes?
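That single layer of access can be made concrete with a small sketch. The following Python is a hypothetical illustration, not a real product API: a registry maps logical dataset names to backend-specific connectors, so consumers ask for data by name and never touch connection details, whether the backend is cloud or on-premises.

```python
# Minimal sketch of a fabric-style access layer. All class names and
# dataset names here are hypothetical illustrations of the idea, not a
# real data-fabric product API.

class Connector:
    """Base class for a backend-specific data connector."""
    def read(self, dataset: str):
        raise NotImplementedError

class WarehouseConnector(Connector):
    """Stand-in for a cloud warehouse (e.g. a hosted analytics store)."""
    def __init__(self, tables):
        self.tables = tables
    def read(self, dataset):
        return self.tables[dataset]

class OnPremConnector(Connector):
    """Stand-in for an on-premises relational or NoSQL store."""
    def __init__(self, tables):
        self.tables = tables
    def read(self, dataset):
        return self.tables[dataset]

class DataFabric:
    """Single point of access: routes logical names to connectors."""
    def __init__(self):
        self.catalog = {}  # logical name -> (connector, physical name)
    def register(self, name, connector, physical_name):
        self.catalog[name] = (connector, physical_name)
    def read(self, name):
        connector, physical = self.catalog[name]
        return connector.read(physical)

# Usage: two different backends, one access layer.
cloud = WarehouseConnector({"orders_2021": [{"id": 1, "total": 9.5}]})
onprem = OnPremConnector({"cust_master": [{"id": 1, "name": "Acme"}]})

fabric = DataFabric()
fabric.register("sales.orders", cloud, "orders_2021")
fabric.register("crm.customers", onprem, "cust_master")

# Consumers query by logical name; the backend serving it is invisible.
print(fabric.read("sales.orders"))
```

The point of the sketch is the catalog: swapping a backend means re-registering one entry, while every consumer keeps using the same logical name.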
Data Fabric Enables Companies to Overcome Technical Debt
As companies embrace the cloud for revenue and growth, there’s a clear need to reduce technical debt (per Gartner). Ironically, data fabric’s peak appears to be the result of technical debt accumulated over years of increased technology adoption, rather than a sign of that debt’s reduction.
Enterprise data transformations today include a multitude of technologies – and an equally dizzying number of skills needed to support those technologies. We are no longer considering only relational stores like MS SQL Server, Oracle, or PostgreSQL; we now have NoSQL options like MongoDB, Cloudera, Neo4J, MarkLogic, and others. On top of this, we are adding cloud options to the stack, with AWS Redshift, GCP BigQuery, MS Synapse, Snowflake and more. We are comparing and adopting numerous solutions with impressive capabilities and thus becoming dependent on their features. Yet companies can’t move everything at once to achieve value, let alone manage all those disparate resources.
Data fabric brings a single management layer to help businesses take advantage of all the value these technologies bring. This not only allows domains to continue with their own technology roadmaps, but it also enables end users to take advantage of what other domains have put in place. This then allows for a multi-layered management and governance landscape by advancing data mesh capabilities across an organization, where governance and transformation are pushed down to, and democratized at, the domain level.
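The domain-level governance described above can be sketched in a few lines of Python. This is a hypothetical illustration under assumed names (the domains, roles, and policy functions are invented): each domain registers its dataset together with its own access policy, so governance stays with the domain while access remains centralized.

```python
# Hedged sketch of domain-owned governance behind a shared access layer.
# Domains, roles, datasets, and policies below are hypothetical examples.

def finance_policy(user):
    # The finance domain decides who may read its data.
    return "finance" in user["roles"] or "analyst" in user["roles"]

def hr_policy(user):
    # The HR domain holds its data more tightly.
    return "hr" in user["roles"]

# Central catalog; each entry carries the owning domain's policy.
catalog = {
    "finance.revenue": {"rows": [{"q": "Q1", "rev": 100}], "policy": finance_policy},
    "hr.salaries":     {"rows": [{"emp": 1, "band": "B"}], "policy": hr_policy},
}

def read(name, user):
    """Single point of access; the owning domain's rule decides."""
    entry = catalog[name]
    if not entry["policy"](user):
        raise PermissionError(f"{user['name']} may not read {name}")
    return entry["rows"]

analyst = {"name": "ana", "roles": ["analyst"]}
print(read("finance.revenue", analyst))  # allowed by the finance domain
```

Here the access point is shared, but each policy is written and maintained by the domain that owns the data, which is the democratization the data mesh concept describes.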
Bring on the Future
Centralizing the data fabric allows for democratization of domain-level processes and data. Monolithic architectures can’t be the answer anymore. As Gartner notes, large-scale lift and shift is on a downward trend, and cloud-native services are on the rise for 2022. However, cloud-native services don’t simply remove the hurdles of an organization’s past technology adoption. Having a single point of insights and accessibility has become fundamental for organizations to achieve value from their data assets and lay a foundation for their cloud-everywhere future. Enter the data fabric.