The environment around us is evolving rapidly with the adoption of AI in various forms, from personal assistants to systems that predict events. Its penetration into our personal and professional lives is bound to increase even faster through the solutions available today and those under development. There is no doubt that solutions enabled by artificial intelligence are having a positive impact on individuals and enterprises, but the rapid pace of evolution and adoption raises an important question: are we subconsciously influencing AI with our own biases?
Biases and their effect on AI solutions
The Oxford Dictionary defines bias as an "inclination or prejudice for or against one person or group, especially in a way considered to be unfair."
Biases have been categorized into multiple types, and the list is constantly evolving. Two prominent types are confirmation bias, where a result or set of results is preferred because it confirms a prior belief, and availability bias, which gives more weight to information that is readily available or applicable to an individual.
To elaborate on how our biases might be influencing AI, let's look at the key building blocks of any AI solution. These are:
All three of these building blocks are developed, provided, and controlled by humans. Even if creators and users do not intend to apply any biases consciously, there is always a possibility that some biases make their way in subconsciously, and a special focus is required to ensure that AI solutions are completely unbiased.
Let's take an example for each of these building blocks and see how easily such biases can creep into AI if due care is not taken.
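To make the data side of this concrete, here is a minimal, hypothetical sketch; the scenario, group names, and numbers are invented for illustration and are not drawn from the article or any real system. A model trained on historically skewed approval records simply learns that skew and reproduces it for otherwise identical candidates.

```python
# Hypothetical illustration: bias riding in with the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic "historical" records: group A dominates the data and was approved
# far more often than group B, purely because of past human decisions.
n_a, n_b = 900, 100
scores = np.concatenate([rng.normal(0.6, 0.1, n_a), rng.normal(0.6, 0.1, n_b)])
group = np.concatenate([np.zeros(n_a), np.ones(n_b)])            # 0 = A, 1 = B
labels = np.concatenate([rng.binomial(1, 0.7, n_a),              # A approved ~70%
                         rng.binomial(1, 0.3, n_b)])             # B approved ~30%

# Train on score AND group membership: the model learns the historical skew.
X = np.column_stack([scores, group])
model = LogisticRegression().fit(X, labels)

# Score two batches of identical candidates that differ only by group.
for g, name in [(0, "group A"), (1, "group B")]:
    candidates = np.column_stack([np.full(200, 0.6), np.full(200, g)])
    rate = model.predict(candidates).mean()
    print(f"Predicted approval rate for {name} at an identical score: {rate:.2f}")
```

No one in this sketch set out to discriminate; the skew came entirely from the data the model was given.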
Who should curtail biases?
With this possibility of subconscious influence established, the next big question is: who is responsible for ensuring that such biases are not inherited by AI solutions? The answer is all of us: researchers, developers, enterprises, and users.
We are all collectively responsible for ensuring that these solutions are as unbiased as possible and can be put to use for all of the 7 billion+ people around the world. Various de-biasing techniques are available today, and more are being explored. It is important to ensure that AI solutions are validated for bias, along with functional and technical tests, before putting them to use in real-world applications.
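As one illustration of what such a validation step could look like (the metric, data, and 0.1 threshold below are assumptions made for this sketch, not a prescribed standard or the article's own method), the snippet computes a simple demographic-parity gap on a model's validation predictions and flags it when the gap is too large.

```python
# Hypothetical bias-validation sketch: a demographic-parity check run alongside
# functional and technical tests before release.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between the two groups."""
    rate_a = predictions[groups == 0].mean()
    rate_b = predictions[groups == 1].mean()
    return abs(rate_a - rate_b)

# Assumed model outputs and group labels for a held-out validation set.
preds = np.array([1, 1, 0, 1, 1, 0, 0, 1, 0, 0])
grps = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_gap(preds, grps)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # threshold chosen purely for illustration
    print("Bias check failed: gap exceeds the agreed threshold; investigate before release.")
```

In practice, the metric and threshold would be chosen to fit the use case, and checks like this would sit next to the usual functional test suite rather than replace it.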
Gaurav Agnihotri
Gaurav Agnihotri is Senior Practice Leader, Wipro HOLMES™ AI & Automation Ecosystem. In his current role, he heads multiple strategic programs that have a direct impact on positioning, marketing, go-to-market initiatives, top-line growth, product roadmap, and use cases. He brings to the table nearly a decade of diverse and rich experience in consulting, business development, strategy, marketing, and partnerships across industries, geographies, and technologies. Gaurav holds an MBA from the University of Georgia, USA, and a Bachelor's degree in Electronics Engineering from NIT Surat, India.
© 2021 Wipro Limited