For a fleeting few months, I thought we were all doomed. I had just gotten back from a conference (Money 2020) that featured a physicist, various company owners, and many brilliant engineers. On the first day of the conference I sat in awe, listening to what was happening with machine learning: how we were creating artificial body parts from plastic, how Moore's law was indeed slowing down, and other ways machine learning was going to change the world. Fascinated by this and many of the predictions, I went away wondering what would happen to people. Are we destined to be terminated? Will we morph into greater intellectual beings (doubtful, but it could happen)?
After a few months, I've come to a different conclusion. If we are modeling artificial intelligence on humans, even the brightest, there is still an inherent bar. Think of it this way: if we took the recently deceased Stephen Hawking's brain and said "go learn that, be that," isn't the resulting AI limited to the boundaries set by his capability? Say we ask it to learn every scientist in the world, ever. You still have a limit that only a human mind can break. Or am I wrong?
For years I, along with others, have been trying to figure out why we win or lose a deal. Most of us don't have the statistical chops, or the data, to determine exactly why. I still firmly believe that with the proper questions and a large enough sample size, this can be narrowed down to why one wins or loses. Randomness can never be fully accounted for, though: even when the criteria point to a loss, there is always a statistical chance you win.
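The idea above, that with enough sample size each combination of deal criteria reduces to a win *probability* rather than a certainty, can be sketched as a simple tally. Everything here is hypothetical: the field names (`sponsor`, `incumbent`) and the tiny dataset are made up for illustration, not real sales data.

```python
from collections import defaultdict

# Hypothetical deal records: (had_exec_sponsor, competitor_was_incumbent, won).
# Field names and outcomes are illustrative only.
deals = [
    (True, False, True), (True, False, True), (True, True, False),
    (True, True, True),  (False, True, False), (False, True, False),
    (False, False, True), (False, True, False), (True, False, True),
    (False, False, False),
]

# Tally wins and totals for each combination of criteria.
counts = defaultdict(lambda: [0, 0])  # (sponsor, incumbent) -> [wins, total]
for sponsor, incumbent, won in deals:
    counts[(sponsor, incumbent)][0] += int(won)
    counts[(sponsor, incumbent)][1] += 1

for key, (wins, total) in sorted(counts.items()):
    print(key, f"win rate {wins}/{total} = {wins / total:.0%}")
```

Even in this toy tally, no bucket with more than a handful of deals sits at a guaranteed 0% or 100%, which is the point: criteria give you odds, not verdicts.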
What does this have to do with AI? I believe those who use AI to evaluate the results of sales organizations are destined for incorrect results if they model the decision process on human learning and capability. Rather, this AI should be founded on laws or truths that we have relative certainty are in fact correct.
This takes me to two examples I have seen of using Watson for decision making. Yes, I know some of you will say that Watson is a flawed system. So is yours; get over yourself. In the first example, I asked Watson who was the better hockey player, Patrick Kane or Wayne Gretzky. Watson decided that Kane was better. The only knock, of course, was that the people running the conference had failed to load any meaningful data about Gretzky. Hence the sample set was too small.
The other failure came when, in a sales-related effort, we asked Watson which customers we should target to sell commission software to. The fundamental failure there, I believe, was the question itself. Ah, but in both cases, shouldn't the AI know about the inherent issues in the results it spits back? How is it that Doug can look at the list of potential customers and know that, of the 458 it spit out, there was truly an opportunity at only 34 of them? Why is it that Doug can fundamentally feel that there was a better comparison of Gretzky and Kane to be made, yet the AI didn't say so?
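Both failures share a shape: a confident answer built on an empty or lopsided sample, with no self-check. A minimal sketch of that shape, using made-up point totals (not real NHL statistics), shows how a naive comparison flips when data is missing, and what the self-check Doug performed in his head might look like in code:

```python
# Hypothetical career-point totals; numbers are illustrative only.
full_data = {"Gretzky": 2800, "Kane": 1200}

# What the system effectively saw: Gretzky's record never got loaded.
loaded_data = {"Gretzky": 0, "Kane": 1200}

def better_player(data):
    # Naive "most points wins" comparison, with no check that the
    # sample behind each number is meaningful.
    return max(data, key=data.get)

def better_player_checked(data, min_sample=1):
    # The self-check the article argues the AI should have made:
    # refuse to answer when any record looks empty.
    if any(points < min_sample for points in data.values()):
        return None  # insufficient data to compare
    return max(data, key=data.get)

print(better_player(loaded_data))          # Kane "wins" purely from missing data
print(better_player_checked(loaded_data))  # None: the honest answer
print(better_player_checked(full_data))    # Gretzky, once the data is there
```

The checked version doesn't make the system smarter; it just makes it admit what it doesn't know, which is exactly the signal the 458-prospect list lacked.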
Herein lies the flaw: I believe AI is a way of failing fast. If it is doomed to fail and then learn from those failures, is that what we want? Do we want to experience those failures in circumstances where succeeding is the only option? I'm not discounting AI, but I can't help feeling that if it is modeled on humans alone, it is destined for mediocre results. People are right to fear AI modeled with bad intentions, as it will likely fulfill those intentions, albeit in an average way. The only problem is that with enough at-bats of failing, it will eventually get things right. Conversely, if it is destined to fail first, say in medical results, how many people will be hurt before things are gotten right?
And last, how many businesses will fail based on AI-related decisions that were fundamentally crap? Are we in line to see a bunch of companies turn over before there is a sample size, with the right information, that moves the line for an entire industry? Then, when that industry succeeds, who will be able to beat them?