So, it's Monday, your weekend has been brilliant, you walk into the office, have your first cup of coffee, and then your calendar goes bing! You see a meeting invite titled "XYZ Vendor - Updates on AI Integration". You sigh. This is going to be one more presentation with a tedious video of two people trying to get Alexa (or Cortana) to do their bidding. And while everything seems stunningly smooth in the video, you can practically see the shadow of a marketing person crouching just out of frame behind Alexa, reciting all the responses in an Alexa-accented voice. Don't believe me? Here's an article that will convince you.
But that isn't your biggest problem. Your biggest problem is that you get out of the meeting and senior management walks up to you and says, "I think we need to integrate with Alexa, and by the way, why don't we make it a self-learning system so we can get rid of all our support staff?" Now you start to sweat. See, one of the problems with the technology world today is that AI, or Artificial Intelligence, has become the magic silver bullet that everybody hopes will solve all their problems and raise stockholder value, without understanding what it is capable of or how much work is needed. And believe me, while some of the work in AI is very impressive, developing an AI-enabled anything is hard. And while the slick (and not-so-slick) marketing videos present a picture of general-purpose AI being readily available, it isn't. So the first thing you need to do is help management think about AI the right way.
A story about a horse
Let me tell you a story about a clever horse. This horse could add, subtract, multiply and do differential equations. Think about that: the world's first artificial intelligence was actually an animal! But there was something strange about this horse. Whenever the horse couldn't see the questioner, or whenever the questioner did not actually know the answer, the horse invariably got the answer wrong. After a lot of testing, and hopefully a lot of carrots for Hans (for Hans was the name of the horse), the examiner worked out that the horse wasn't answering the question at all. He was doing something a little more subtle: reading the body language of the human before him, and using those cues to work out what the right answer was. This is a feat of social communication that AI today is completely incapable of. So how does this tie back to the right way of thinking about AI?
Your AI du jour is a horse
Really. And not even an intelligent one at that. And just like a horse, it can't explain what it does. This property is called inexplicability and is something that most vendors don't want to spend too much time talking about. To be fair, the entire industry has also willfully or otherwise chosen to ignore this very important topic. So what your senior management needs to understand about AI is this:
- It's like a horse: Train it hard enough and it will produce the right result for you, but...
- It needs training: And extensively, if you don't want it to be like clever Hans and produce the wrong result. And training needs data. A lot of it.
- It is inexplicable and ambiguous: You will never know whether your AI is actually reading the math problem and solving it, or reading some latent variable you are unaware of. So it will invariably get a great many more things wrong than right. Which means...
- You have to have a policy to handle this: And you need to define it before you start out on your latest Alexa integration, otherwise you will have an irate customer complaining that your AI is dumber than a horse.
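The "latent variable" problem above is easy to demonstrate. Below is a minimal, hypothetical sketch (all names and data are made up for illustration): a toy classifier is trained on data where a spurious cue perfectly tracks the right answer — the statistical equivalent of Hans reading the examiner's body language — and its accuracy collapses the moment that cue is taken away.

```python
import numpy as np

# Toy "Clever Hans" classifier. During training, a spurious cue (x2,
# the examiner's body language) perfectly tracks the label, so the
# model learns to read the cue rather than the weak real signal (x1).
rng = np.random.default_rng(0)

def make_data(n, cue_tracks_label):
    y = rng.integers(0, 2, n).astype(float)
    x1 = y + rng.normal(0, 1.0, n)  # weak real signal, buried in noise
    # spurious cue: equals the label in training, pure noise otherwise
    x2 = y if cue_tracks_label else rng.integers(0, 2, n).astype(float)
    return np.column_stack([x1, x2]), y

def train_logreg(X, y, lr=0.5, steps=2000):
    # plain logistic regression via gradient descent
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        grad = p - y
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def accuracy(w, b, X, y):
    return float((((X @ w + b) > 0) == (y > 0.5)).mean())

X_tr, y_tr = make_data(2000, cue_tracks_label=True)   # cue present
X_te, y_te = make_data(2000, cue_tracks_label=False)  # cue removed
w, b = train_logreg(X_tr, y_tr)
train_acc = accuracy(w, b, X_tr, y_tr)
test_acc = accuracy(w, b, X_te, y_te)
print(f"train accuracy: {train_acc:.2f}, test accuracy: {test_acc:.2f}")
```

The model looks brilliant on its own training data and falls to near coin-flip accuracy once the hidden cue disappears — and, like the horse, it cannot tell you which feature it was actually reading. That is exactly why you need a policy before the first customer meets your AI.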
So, what's a good policy to govern AI projects in the enterprise? In the next post we will detail a few best practices that we have learned through trial and error.