The adage ‘Beauty lies in the eyes of the beholder’ holds true for an enterprise’s artificial intelligence and machine learning journey: success lies in the approach taken to adoption. The journey itself is interesting, but it often takes enterprises through alternating sprints of high and low confidence in a successful implementation.
When it comes to artificial intelligence/machine learning (AI/ML) implementation, the journey matters more than the destination. The road taken defines the success of an enterprise’s AI/ML journey.
In today’s demanding world, where AI/ML adoption is no longer an option but a necessity to stay relevant and competitive, enterprises cannot afford any delay in embarking upon this journey. This paper aims to de-risk successful adoption of AI/ML initiatives for enterprises by drawing upon our learnings from the experience of delivering cross-industry AI/ML solutions and mapping them with key failure points.
Eight key conflict zones in AI/ML implementation and our recommendations
Most companies globally aspire to go digital and are working toward it in their own ways. Data science, blockchain, AI/ML, conversational AI, cloud computing, and other emerging technologies are disrupting businesses everywhere. Technology service providers are showcasing the art of the possible even as expectations associated with these technologies soar. All this often leads organizations to believe that every single business opportunity is an AI/ML candidate. While this may be true to some extent, service providers, sales teams, and technology experts are often overly eager to onboard every use case into the AI/ML delivery pipeline. It may seem like a win-win situation for both organizations and service providers to begin with, but the reality often turns grim once organizations realize they have entered a conflict zone.
Every AI/ML journey starts with an expectation that gets translated internally into acceptance criteria. A clear understanding of these criteria plays a crucial role in making the AI/ML project successful. It is important to know what business outcomes the AI/ML solution can deliver at each milestone of the journey and how the solution will gain wider acceptance from different stakeholders in the organization.
Zone-1: When it is unclear who (Technology or Operations) is driving the transformation initiative at the customer’s end.
As a service provider, you may discover that the Technology group would like to see more of technology use cases whereas the Operations group would prefer to see more operational use cases.
If you lack clarity on who is driving the initiative, you tend to align with whichever group is throwing more challenges at you and at the AI/ML technology. The Technology group may not see or understand the real need for the solution if the benefit is aligned more towards Operations, and vice versa. Such a situation not only breeds conflict but also disappointment with the technology itself, depriving the project of a fair chance to survive.
Recommendation: AI/ML projects deliver the best results when initiatives are co-led by Technology and Operations. At the very least, the service provider should be clear on who is driving the business case and who the ultimate beneficiary is. Focus should be on the outcome rather than on a technological demonstration, unless one is specifically asked for.
Zone-2: When the acceptance criteria are not well defined and lack pragmatism.
In most AI/ML implementations, accuracy and acceptance criteria are treated as the same, interchangeable terms. They should not be, unless the use case is purely technology oriented. Acceptance criteria can be defined in multiple ways around end outcomes such as productivity gain, yield improvement, manual effort saved, speed of delivery, ability to handle scale, cycle-time reduction, and more.
Conflicts are bound to happen when acceptance criteria are based solely on accuracy. Consider a ‘document digitization’ use case. Customers expect everything to be straight-through processing (STP), which is rarely the case. A commitment of 80% or 90% accuracy can confuse everyone involved in the AI/ML journey, because accuracy can be defined at the ‘attribute level’, the ‘document level’, or the ‘process output level’. In most AI/ML projects, we consider it at the ‘process output level’.
Customers, on the other hand, may assume that 90% accuracy means 90% of cases will be processed straight through. This is very hard to achieve and can lead to a major conflict; many AI/ML projects get stuck at this stage.
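To see why attribute-level accuracy does not translate into an equivalent STP rate, consider a back-of-the-envelope sketch. The per-attribute accuracy and the field count below are assumed figures for illustration, not drawn from any specific engagement:

```python
# Illustrative only: assumed figures for a document-digitization use case.
# If accuracy is measured per attribute, the document-level (STP) rate is
# roughly the product of the per-attribute accuracies, assuming errors
# are independent across fields.

attribute_accuracy = 0.95   # assumed per-field extraction accuracy
attributes_per_doc = 10     # assumed number of fields per document

stp_rate = attribute_accuracy ** attributes_per_doc
print(f"Approximate straight-through rate: {stp_rate:.1%}")  # ~59.9%
```

Even with 95% accuracy per field, only about six documents in ten pass straight through untouched, which is why the level at which accuracy is defined must be agreed upfront.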
Recommendation: Explain the AI/ML solution to the customer in terms of cost benefits and productivity gains. Avoid going too deep into statistical terminology around accuracy, as you cannot win the game there.
Zone-3: When the customer fails to understand the prerequisites for model training: the sample dataset, its volume, and the associated labelling effort.
Customers are often unaware of the dynamics behind the sample dataset and fail to factor in this effort, or underestimate it. In such cases, even when training datasets are received, you cannot be sure they are correct, balanced, and adequate for model training, or whether they carry bias or skew.
This uncertainty invariably surfaces at the User Acceptance Test phase, when the disconnect around test data becomes evident. By this stage, however, it is often too late, and the solution is unlikely to get the credit it deserves.
Recommendation: Educate the customer that ML models are only as good as the data fed into them.
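A lightweight sanity check on label balance can catch skewed training data before modelling begins. The 5:1 tolerance and the toy invoice/receipt sample below are assumptions made for this sketch:

```python
from collections import Counter

def check_label_balance(labels, max_ratio=5.0):
    """Flag heavily skewed training data before modelling begins.

    max_ratio is an assumed tolerance: the largest class may be at most
    max_ratio times the size of the smallest class.
    """
    counts = Counter(labels)
    largest = max(counts.values())
    smallest = min(counts.values())
    balanced = largest / smallest <= max_ratio
    return counts, balanced

# Toy sample: 90 invoices vs 10 receipts -- a 9:1 skew.
labels = ["invoice"] * 90 + ["receipt"] * 10
counts, balanced = check_label_balance(labels)
print(counts, "balanced:", balanced)  # balanced: False
```

Running a check like this during data handover turns a vague "is the data good enough?" conversation into a concrete, early finding.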
Zone-4: When you overestimate the power of technology and make unrealistic commitments.
AI/ML models deliver predictions around events, and those predictions are represented in terms of a confidence score: the higher the prediction accuracy, the higher the confidence score.
You are unlikely to build an automated solution with AI/ML models alone. Such a solution generally involves other components such as optical character recognition (OCR), robotic process automation, integration components, and business rules. You may be proven wrong if you think the confidence score alone defines your solution’s capability.
AI/ML solution components work in series: if one fails to deliver the right output, the next is bound to fail. For instance, if OCR cannot extract a correct field value (for example, a numeric, alphanumeric, or noun field), the ML model cannot guarantee a correct prediction. Models cannot do much when the input data itself is incorrect.
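The compounding effect of chained components can be sketched with assumed stage accuracies (none of the figures below come from a real deployment):

```python
# Illustrative only: assumed per-stage accuracies for a chained pipeline.
# When components run in series, end-to-end accuracy is roughly the
# product of the individual stage accuracies.

stages = {
    "ocr_extraction": 0.92,   # assumed
    "ml_prediction": 0.90,    # assumed
    "business_rules": 0.98,   # assumed
}

end_to_end = 1.0
for name, accuracy in stages.items():
    end_to_end *= accuracy

print(f"End-to-end accuracy: {end_to_end:.1%}")  # ~81.1%, below every stage
```

The end-to-end figure is lower than any single stage, which is why committing to the ML model’s accuracy as if it were the solution’s accuracy invites conflict.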
Recommendation: Involve a human-in-the-loop for quick review and validation. The confidence-score mechanism works well only when your models are robust to variations in their input.
Zone-5: When the necessity of human-in-the-loop is not well understood, and the effort is not factored in.
Confusion arises when you bring in this piece of the puzzle too late in the game, because customers often make the mistake of viewing AI/ML solutions as zero-touch.
In reality, AI/ML implementations deliver substantial productivity gains with some human involvement. For instance, if you factor in 20% manual effort for the human-in-the-loop, you still save 80% of the effort, which makes for a great business case.
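A minimal sketch of how the human-in-the-loop split can be wired up is confidence-based routing. The 0.85 threshold below is a hypothetical value; in practice the cut-off is tuned per use case against the agreed acceptance criteria:

```python
# Sketch of confidence-based routing with an assumed threshold.
REVIEW_THRESHOLD = 0.85  # hypothetical cut-off; tune per use case

def route(prediction, confidence):
    """Accept high-confidence predictions; send the rest to a reviewer."""
    if confidence >= REVIEW_THRESHOLD:
        return ("auto", prediction)
    return ("human_review", prediction)

print(route("invoice", 0.97))  # ('auto', 'invoice')
print(route("receipt", 0.60))  # ('human_review', 'receipt')
```

Raising the threshold sends more cases to humans and fewer to straight-through processing, so the threshold itself becomes the lever that trades manual effort against error risk.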
Recommendation: Educate the customer that AI/ML technologies deliver the best outcomes when they work in sync with humans to augment decision-making, rather than by replacing them.
Zone-6: When you are dealing with unknown intentions.
We all know that AI/ML solutions deliver substantial productivity gains by reducing human effort on repetitive and mundane tasks. However, some employees may not appreciate the solution, as it inevitably disrupts their work lives and brings change. At times, these people may bring in difficult or outlier scenarios to prove that things are not as great as promised.
Recommendation: Set clear boundaries with the company’s senior management and ask for support in handling the change management aspect across the organization.
Zone-7: When infrastructure requirement and readiness levels are underestimated.
Inaccurate assumptions around infrastructure requirements, approval timelines, associated cost, compliance, access permissions, security, architectural reviews, etc. can throw AI/ML projects off-track.
As a result, you may be forced to shrink your implementation plan and compromise on time, affecting the quality and impact of the AI/ML solution.
Recommendation: Factor-in the nuances associated with all these assumptions early on in the AI/ML journey.
Zone-8: When a company fails to balance the rewards and penalties associated with the outcome.
An unbalanced reward-penalty structure can drive technology providers away by amplifying the financial risks associated with the commercial model.
Recommendation: Educate customers, enable them to learn and understand the capabilities as well as limitations of AI/ML technologies and gain better clarity on the trade-offs.
Moving beyond hype to reality
The key to success in the AI/ML journey lies in setting the ‘right expectations’ beforehand. While there is no doubt that enterprise AI will have a significant impact on business outcomes in the future, taking the time to plan the journey, forecast deviations, and establish workarounds early on will put organizations in a better position to reap the rewards.
As the adage goes – ‘the most liberating thing about beauty is realizing that you are the beholder. This empowers you to find beauty in places where others have not dared to look’. The AI/ML journey is no different.