September 2018
So, it's Monday, your weekend has been brilliant, you walk into the office, pour your first cup of coffee, and then your calendar goes bing! A meeting invite reads "XYZ Vendor - Updates on AI Integration". You sigh: this is going to be one more presentation with a tedious video of two people trying to get Alexa (or Cortana) to do their bidding. Everything in the video looks stunningly smooth, and yet you can almost see the shadow of the marketing person crouching in the background just out of frame behind Alexa, reciting all the responses in an Alexa-accented voice. Don't believe me? Here's an article that might change your mind.
But that isn't your biggest problem. Your biggest problem is that you get out of the meeting and senior management walks up to you and says, "I think we need to integrate with Alexa, and by the way, why don't we make it a self-learning system so we can get rid of all our support staff?" Now you start to sweat. One of the problems with the technology world today is that AI, or Artificial Intelligence, is treated as the magic silver bullet that everybody hopes will solve all their problems and raise shareholder value, without much understanding of what it is capable of or how much work it takes. And believe me, while some of the work in AI is very impressive, developing an AI-enabled anything is hard. The slick (and not-so-slick) marketing videos paint a picture of general-purpose AI being readily available; it isn't. So the first thing you need to do is help management think about AI the right way.
Let me tell you a story about a clever horse. This horse could apparently add, subtract, multiply and even handle differential equations. Think about that: the world's first artificial intelligence was an animal! But there was something strange about this horse. Whenever he couldn't see the questioner, or whenever the questioner did not actually know the answer, the horse invariably got it wrong. After a lot of testing, and hopefully a lot of carrots for Hans (for Hans was the name of the horse), the examiner worked out that the horse wasn't answering the question at all. He was doing something a little more subtle: reading the body language of the human in front of him and using those cues to work out what the right answer should be. That is a feat of social communication that AI today is completely incapable of. So how does this tie back to the right way of thinking about AI?
Your AI is, in effect, that horse. Really. And not even a particularly intelligent one at that. And just like the horse, it can't explain what it does. This property is called inexplicability, and it is something most vendors don't want to spend much time talking about. To be fair, the industry as a whole has, willfully or otherwise, chosen to ignore this very important topic. So what your senior management needs to understand about AI is this: like Clever Hans, today's AI is very good at picking up patterns and cues, but it doesn't truly understand the problem, it can't explain its answers, and a general-purpose version of it is not available off the shelf.
So, what's a good policy to govern AI projects in the enterprise? In the next post we will detail a few best practices that we have learned through trial and error.