AI is becoming ubiquitous in everyday life – powering recommendations on on-demand and non-linear media platforms, product-bundle suggestions on e-commerce and catalog-retailer websites, social media advertisements, applications for jobs, universities and bank loans, and more. AI either drives these decisions independently or supports the process. This gradual proliferation of AI, and the significant influence it can have on a consumer's choices, has led to increased scrutiny from the public and regulatory bodies.
With this increased focus, a rogue AI/ML system can easily tarnish an organization's brand. Real-life occurrences of undesirable consequences – discrimination in recruiting or lending, autonomous-vehicle decisions that create accident liability, search results skewed by bias – have come to light, raising concern among AI/ML stakeholders. This heightened attention is driving new regulations and frameworks intended to raise the ethical standards of AI models.
Increasing Concern Among Business Leaders
Regulators and policymakers across geographies are stepping in with regulatory frameworks and guidelines to ensure ethical AI implementation. In the US, the Algorithmic Accountability Act (AAA) has been introduced to empower the Federal Trade Commission to conduct impact assessments of algorithmic "automated decision making" (including AI/ML). The AAA aims to evaluate automated decision-making systems across several dimensions, including accuracy, fairness, bias, privacy, AI explainability and security. The European Union (EU) released its draft AI regulations in April 2021, which take a risk-based approach: AI/ML systems are categorized as 'unacceptable', 'high' or 'low' risk based on their intended use.
Rogue events and the upcoming regulations continue to instill fear in business leaders who operate customer-facing AI/ML (CF-AI) systems or are creating new models. This hesitation can be attributed to one or more of the following factors:
Uncertainty about model fairness
Is the model output discriminating against a particular subgroup of people? AI/ML models are complex to build, and it is not always easy to detect when bias has been introduced. Bias can lead to reputational damage or attract legal penalties.
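One common way to surface this kind of bias is to compare decision rates across subgroups. Below is a minimal sketch of one such check, the demographic parity gap, on entirely synthetic data; the group labels and decisions are illustrative assumptions, not a prescription for any particular fairness definition.

```python
# Hypothetical sketch: checking demographic parity on model decisions.
# All data here is synthetic; group labels are illustrative assumptions.

def demographic_parity_gap(decisions, groups):
    """Difference in positive-decision rates between subgroups."""
    rates = {}
    for g in set(groups):
        subset = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(subset) / len(subset)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Synthetic loan decisions (1 = approved) for two subgroups.
decisions = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
print(f"Approval-rate gap: {gap:.2f}")  # a large gap warrants investigation
```

A near-zero gap does not prove a model is fair – fairness has many competing definitions – but a large gap on a protected attribute is exactly the kind of signal that should trigger review before deployment.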
Lack of customer trust in models
If customers are unhappy with the models and do not trust the decisions they reach, it can cause reputational harm to the company. Even when an algorithm arrives at the same decision a human would have, customers still expect the outcome to be explainable, so AI explainability remains a concern.
Stability and robustness of the models
There are increasing concerns about privacy and stability when deploying new models. A model built today might not be stable in a few weeks as the environment changes. And the business is under pressure to protect client data: a model is another surface where a customer's personally identifiable information could be exposed and then exploited through cyber-attacks.
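The stability concern above is typically monitored by comparing the distribution of live inputs against the training data. Below is a minimal sketch using the Population Stability Index (PSI), a common drift statistic; the bin edges, sample values and the ~0.2 alert threshold are illustrative assumptions.

```python
# Hypothetical sketch: flagging input drift with the Population
# Stability Index (PSI) between a training sample and live data.
import math

def psi(expected, actual, edges):
    """PSI over shared bins; values above ~0.2 often signal drift."""
    def proportions(values):
        counts = [0] * (len(edges) - 1)
        for v in values:
            for i in range(len(edges) - 1):
                if edges[i] <= v < edges[i + 1]:
                    counts[i] += 1
                    break
        total = max(sum(counts), 1)
        # Small floor avoids log(0) for empty bins.
        return [max(c / total, 1e-6) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

train = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.5, 0.6]   # training-time scores
live = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85]  # recent production scores
edges = [0.0, 0.25, 0.5, 0.75, 1.0]
print(f"PSI: {psi(train, live, edges):.3f}")
```

Running a check like this on a schedule, over both input features and output scores, turns "the model might have gone stale" from a vague fear into an alert that can be acted on.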
Lack of skilled labor
Leaders are wary of relying on a technology for which subject matter experts are scarce. Do they have the talent to maintain the models? Are models being sent to production following best practices? While companies are now launching products that make AI and ML development easier, the scarcity of industry experts introduces risk.
These concerns are causing AI projects to get trapped in the AI/ML experimentation pit without ever realizing the intended ROI from these ventures.
Accuracy is Not Everything Businesses Should Care About
When creating a new model, in most cases, success/exit criteria for use cases primarily revolve around some baseline performance figures. These figures are obtained directly from an “as-is” state or the subjective judgment of business stakeholders. The solution design decisions rest with the data scientists who perform the exploratory data science work with a central focus on beating the initial baseline accuracy figures or improving the performance metrics.
As such, little or no attention is paid to ethical AI design considerations such as minimizing unwanted bias in the system or weighing the trade-off between AI explainability and performance. These considerations become an afterthought once a model has been identified for a use case.
Business leaders set the expectation that the performance of an ML model is the sole success criterion of a project. Combined with partial or selective adherence to MLOps best practices and the absence of adequate AI regulations and governance controls, this dictates a solution-building and deployment workflow in which ethical design concerns are most likely overlooked.
Realizing AI Strategy and Protecting High Risk and CF-AI Systems
Business leaders need to consider action plans to mitigate the risks associated with operating AI systems (especially 'high-risk' and CF-AI systems). This will enable companies to bring investments caught in AI experimentation pits back on track and realize their AI strategy. Enterprises should pursue three initiatives in concert:
Upskilling employees with Ethical AI considerations
Businesses should focus on their people. Efforts should ensure that employees are aware of ethical AI, the directives of AI regulations and the value of following MLOps best practices across the entire life cycle of an AI system. Specifically, employees working on models need to be trained to minimize bias and make predictions explainable.
Infusing transparent processes and frameworks in model creation and deployment
Outline industry- and market-specific frameworks that enable stakeholders to define fairness for each use case. The framework should strike a balance between explainability, performance and minimizing bias while supporting the critical stages of design and development. This could include extending the Governance and Compliance (G&C) framework to oversee the use and design of AI/ML systems from inception through production and post-production, thereby mitigating AI/ML-specific risks.
The framework, combined with AI regulations, should allow the provisioning of two broad pathways at the initial AI opportunity qualification stage: one for internal business-facing AI systems and the other for ‘high-risk’ and/or CF-AI systems. The pathway for ‘high-risk’ and/or CF-AI systems should have appropriate G&C measures for human oversight, data privacy, AI explainability, accountability, system robustness, stability and fairness.
Explore tooling to build high-quality AI/ML systems
Tools can mitigate risks by detecting bias, verifying AI systems for robustness and stability, serving models into production, explaining the predictions produced by a model, and flagging data and model-performance drift. Numerous options are available, including open-source algorithms like LIME and SHAP for explainability, platforms and services from the cloud hyperscalers, and other market offerings from start-ups and scale-ups. Business leaders can provide these tools so their skilled workforce can vet their models and training data.
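Explainability tools like LIME and SHAP rest on the idea of additive feature attribution: decomposing a single prediction into per-feature contributions relative to a baseline. Below is a minimal, dependency-free illustration of that idea for a linear model, where the decomposition is exact; the feature names, weights and baseline values are hypothetical, and a real deployment would use SHAP or LIME on the actual model rather than this toy.

```python
# Minimal illustration of additive feature attribution -- the principle
# that libraries like SHAP formalize for arbitrary models. For a linear
# model, each feature's contribution to a prediction decomposes exactly
# as coefficient * (value - baseline). All names/numbers are hypothetical.

def linear_attributions(coefs, x, baseline):
    """Per-feature contributions of input x relative to a baseline input."""
    return [c * (xi - bi) for c, xi, bi in zip(coefs, x, baseline)]

coefs = [0.8, -0.5, 0.3]      # trained model weights (assumed)
baseline = [0.4, 0.6, 0.5]    # e.g., the average applicant (assumed)
x = [0.9, 0.2, 0.5]           # one applicant being scored

contribs = linear_attributions(coefs, x, baseline)
for name, c in zip(["income", "debt_ratio", "tenure"], contribs):
    print(f"{name:>10}: {c:+.2f}")
```

An attribution report like this is what lets a lender tell an applicant which factors drove a decision, which is precisely the explainability that customers and regulators are beginning to demand.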
Companies must maximize the business value of their AI systems while also ensuring their data is handled fairly and protected. Are companies prepared for the AI regulations that will soon be enforced? What are the appropriate courses of action for infusing the above strategies into their AI modeling? The answers lie in how people, processes and tools are brought together to keep models fair, ethical and trustworthy.