The environment around us is evolving rapidly with the adoption of AI in various forms – from personal assistants to event prediction. AI's reach into our personal and professional lives is set to grow even faster through the solutions available today and those under development. There is no doubt that solutions enabled by artificial intelligence are having a positive impact on individuals and enterprises, but the rapid pace of evolution and adoption raises an important question: "are we subconsciously influencing AI with our own biases?"
Biases and their effect on AI solutions
The Oxford Dictionary defines bias as:
- “Inclination or prejudice for or against one person or group, especially in a way considered to be unfair.
- A concentration on or interest in one particular area or subject
- A systematic distortion of a statistical result due to a factor not allowed for in its derivation”
Biases have been categorized into many types, and the list keeps growing. Two prominent types are confirmation bias, where a result or set of results is preferred because it confirms a prior belief, and availability bias, where judgments lean on whatever information is most readily available to an individual.
To see how our biases might be influencing AI, let's look at the key building blocks of any AI solution. These are:
- Algorithms: Developed, leveraged and applied to solve a specific problem or set of problems around a use case
- Data: Used to build and train solutions
- Experience: Iterations and/or interactions that enable solutions to learn and adapt
All three of these are developed, provided and controlled by humans. Even if the creators and users don't intend to apply any biases consciously, there is always a possibility that some biases make their way in subconsciously, so special focus is required to ensure that AI solutions are as unbiased as possible.
Let's take an example for each of these building blocks and see how easily such biases can creep into AI if due care is not taken.
- Algorithms – These are designed for a specific use and applied with a particular outcome in mind. It is important to ensure that the defined goal or outcome does not open the door to bias. As an example, consider AI used in a classification and routing solution to increase the number of calls answered by agents in a customer service center. If this is the only desired outcome defined and the only measurement criterion set for the AI solution, it will be inclined to put through calls from customers with easy-to-resolve issues and increase the wait time for those seeking help with complex, time-consuming problems, even if those customers are more valuable (with high lifetime value, or LTV). This degrades the overall customer service experience and may drive customer churn. Consider a similar scenario for emergency services response, and the implications are far more alarming.
- Data – Data is a critical building block for AI, used to build and train the solutions. In fact, it is the universe of knowledge an AI solution possesses before being put to use. Hence, it is vital that the data used is as unbiased as possible and covers all relevant aspects of a use case. Take the example of a face recognition solution for law enforcement: if trained primarily on data from a certain ethnic group, it may develop a bias against that ethnicity and suggest a large number of false positives. Such scenarios, if present in the corporate world, will affect processes around hiring, compensation, variable pricing for customers, determining insurance premiums, etc. Historical data with implicit bias will result in a biased AI solution.
- Experiences – Once built, AI solutions learn and adapt based on usage. This is another avenue through which systems can inherit biases. Individual users have their own preferences and may want the supporting AI systems to align with their thought processes. It is important that the solution distinguishes between a user's personal preference and something that warrants a course correction in its general functioning. An example would be loan processing, where a loan approver's personal bias against certain demographic factors should not negatively influence the AI-enabled solution that assists with verification and processing.
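The data concern above can be made concrete with a minimal representation audit over training labels. This is only an illustrative sketch under assumed names and an assumed threshold, not a real auditing tool; production audits would cover many attributes and use domain-specific thresholds.

```python
# Hypothetical sketch: flag under-represented groups in a training set.
# Group names and the 20% threshold are illustrative assumptions.
from collections import Counter

def representation_report(group_labels, min_share=0.2):
    """Return each group's share of the data and the groups falling below min_share."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    shares = {group: count / total for group, count in counts.items()}
    flagged = [group for group, share in shares.items() if share < min_share]
    return shares, flagged

# A skewed face-image dataset: one group dominates the training data.
labels = ["group_a"] * 80 + ["group_b"] * 15 + ["group_c"] * 5
shares, flagged = representation_report(labels)
# group_b (15%) and group_c (5%) fall below the assumed 20% threshold.
```

A check like this, run before training, surfaces the kind of skew that would otherwise only appear later as disproportionate error rates for the under-represented groups.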
Who should curtail biases?
With the possibility of influence and the potential for bias established, the next big question is: "who is responsible for ensuring that such biases are not inherited by AI solutions?" The answer is "all of us – from researchers to developers to enterprises to users."
We are all collectively responsible for ensuring that these solutions are as unbiased as possible and can serve all the 7 billion+ people around the world. Various de-biasing techniques are available today, and more are being explored. It is important to validate AI solutions for bias, alongside functional and technical tests, before putting them to use in real-world applications.
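One simple form such a bias validation could take is a demographic-parity check run alongside functional tests before release. This is a minimal sketch under assumed group names, data and tolerance; real validation would rely on established fairness toolkits and multiple metrics.

```python
# Hypothetical sketch: compare positive-outcome rates across groups.
# The decisions, group names and 10-point tolerance are illustrative assumptions.

def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

decisions = {
    "group_a": [1, 1, 1, 0, 1],  # 80% approved
    "group_b": [1, 0, 0, 0, 1],  # 40% approved
}
gap = demographic_parity_gap(decisions)
passes = gap <= 0.1  # a 40-point gap far exceeds the assumed tolerance
```

A failing check like this would block release the same way a failing functional test does, forcing the bias to be investigated rather than shipped.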