Lean in a little closer…closer…whoa, not that close!
I’m speaking (writing) in hushed tones because in this special sort-of-but-not-really post-Halloween episode we’re discussing the dark side of AI.
Imagine this nightmare scenario: One day in the not-too-far-off future, you’re on the way to work. It’s a beautiful day. The sun is shining, the birds are chirping, your self-driving car is doing all the work while you relax and sing along with your favorite rap-opera-country song.
Suddenly and for no apparent reason, your car violently lurches to the left and drives straight into an artificial tree! Stunned, you emerge a few moments later. The car’s safety features have spared you any physical injury, but the horror slowly sets in that your pumpkin spice latte is gone forever. The EMT robots arrive and begin probing you with instruments you don’t understand as your head clears and you start to wonder what happened.
“Did my constant singing finally drive the car to suicide? Surely my voice isn’t that bad.”
It is, but that probably wasn’t the cause. A short while later, a group of software engineers gathers around the crash site to search for answers, the last strains of Carmen Wrecked My Pickup for Shizzle crackling through the damaged speakers. They mumble to each other in incomprehensible jargon, reminiscent of an ancient forgotten tongue. After a few moments of silence, some head scratching, and more silence, they give each other a knowing glance.
In the morning, they’ll blame it on the hardware.
And now many of you are thinking, “That’s just silly. My mechanic, Bunky, can plug a thing into the car now and find out what’s wrong with it.” Yes, that’s true today, but, on a side note, why are all good mechanics only known by their nickname? Bunky, Cooter, Skeeter…it’s like they’re part of some secret society or something. Or maybe their nicknames hide their real identities so you don’t call them at 3 a.m. when you can’t sleep because your car made a noise on the way home and it’s probably nothing but you’re thinking the car may be possessed by the spirit of Mikhail Baryshnikov because you saw this show on the Discovery channel where…
Sorry, getting off track there. Long story short: I recently had to switch mechanics.
Anyway, cars today are relatively simple compared to the fully autonomous vehicles of the near future. And it’s that complexity that will make it practically impossible to figure out what finally drove your car over the edge. As problems become more complex, rather than programming computers to solve the problem directly, we have to teach them. I know that sounds silly, but board games may help illustrate what I mean.
When IBM’s Deep Blue beat world chess champion Garry Kasparov in 1997, it won essentially through brute force. Chess has an enormous number of possible board configurations — somewhere on the order of 10^45 — and while no computer could check them all, Deep Blue could search millions of positions per second, many moves ahead, to find a strong move. It was vastly different last year when AlphaGo, a program built by Google DeepMind, beat the human Go champion. Go is an ancient Chinese board game with about 2.08 × 10^170 possible board configurations. I tried to think of an example to help us get our heads around that number, but I’m afraid I’m just not that creative. Grains of sand on a beach? Not even close. Cat hairs on your favorite sweater? Nope. Number of times your weird uncle has embarrassed you during the holidays? Warmer, but still not there. If you had a million weird uncles and you were all there for every holiday in every country from the beginning of time? Still not enough.
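To get a feel for what “brute force” means here, below is a minimal sketch of exhaustive game-tree search. It is not chess and nothing like Deep Blue’s actual engine — the game is a toy version of Nim (each player removes 1–3 stones; whoever takes the last stone wins), small enough that the search really can check every possibility:

```python
# A minimal brute-force (minimax-style) game search on toy Nim.
# All names here are illustrative, not from any real chess engine.

def best_move(stones):
    """Return (score, move) for the player about to move.

    Score +1 means the current player can force a win, -1 a forced loss.
    Because this game tree is tiny, we search it exhaustively --
    exactly the luxury a chess engine does NOT have at full depth.
    """
    if stones == 0:
        return -1, None  # the previous player took the last stone: we lost
    best = (-2, None)
    for take in (1, 2, 3):
        if take <= stones:
            opponent_score, _ = best_move(stones - take)
            score = -opponent_score  # our outcome is the negation of theirs
            if score > best[0]:
                best = (score, take)
    return best

score, move = best_move(10)  # from 10 stones, taking 2 forces a win
```

The trick that makes this work — and fail for chess — is that the recursion visits every reachable position. Once the tree is too big to visit, you need evaluation heuristics and deep search (Deep Blue) or learning (AlphaGo).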
You see my dilemma. It’s just a really, really huge number. So how then did a computer beat the human champion? The engineers let it learn by watching the moves from thousands of human matches. Then, after the AI had a rough grasp of the game, the engineers employed what’s called reinforcement learning. Basically, the computer played millions of games against itself until it was amazingly good.
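In the same spirit, here’s a tiny sketch of learning through self-play — tabular Q-learning on the same toy Nim game, nothing remotely like AlphaGo’s deep networks and tree search. Every name and parameter below is illustrative:

```python
# Toy self-play reinforcement learning: the program plays Nim against
# itself many times and learns move values from wins and losses alone.
import random

random.seed(0)
Q = {}  # Q[(stones, take)] -> learned value of taking `take` stones

def legal(stones):
    return [t for t in (1, 2, 3) if t <= stones]

def choose(stones, epsilon):
    # Sometimes explore a random move; otherwise play the best move so far.
    if random.random() < epsilon:
        return random.choice(legal(stones))
    return max(legal(stones), key=lambda t: Q.get((stones, t), 0.0))

def train(episodes=50000, alpha=0.2, epsilon=0.2):
    for _ in range(episodes):
        stones, history = 10, []
        while stones > 0:
            take = choose(stones, epsilon)
            history.append((stones, take))
            stones -= take
        # Whoever made the last move took the final stone and won (+1);
        # walking backward, the sign flips for the alternating players.
        value = 1.0
        for state, action in reversed(history):
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (value - old)
            value = -value

train()
best_opening = max(legal(10), key=lambda t: Q.get((10, t), -2.0))
```

Notice what’s missing: nobody told the program the strategy. The move values in Q emerge from millions of self-play outcomes — which is precisely why, at AlphaGo’s scale, nobody can point to the line of code that explains a given decision.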
So, you can begin to see the issue. Start with an incredibly complex problem. Create a very complex AI system, and then let the AI learn on its own how to solve the problem. The result is that we have no idea why the AI is making the decisions it is. And so we may never know what happened with the car in the scenario above. Maybe your Toyota became self-aware like Skynet from The Terminator. (I realize for most of you “from The Terminator” is superfluous there. It’s obviously not Skynet from Sense and Sensibility. Not saying it’s a bad movie, just a little lacking in the killer robot department. Maybe in the sequel.)
The scary part of all this is that advanced AIs are being implemented in more and more settings where it would seem important to know the why behind the decisions. Such as: who gets what healthcare treatment, who is admitted to which school, who should be paroled, and, of course, which TV shows should be renewed for another season? Also, these systems are often trained using existing data sets. If the data is already tainted with human biases, we’ll need some way to make certain those biases aren’t being carried over into the AI decisions. Unfortunately, as AI systems become more advanced, there will be less insight into what’s going on under the hood. That’s why you may hear them referred to as “black boxes”: we can’t see inside.
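To make the bias problem concrete, here’s a toy sketch — all data invented — of how a model trained to imitate past human decisions inherits whatever bias those decisions contained:

```python
# Toy illustration with made-up data: a "model" that simply learns each
# group's historical approval rate will reproduce any bias baked into
# the historical decisions it was trained on.
from collections import defaultdict

# Invented history: (group, approved) records with an obvious skew.
history = (
    [("group_a", 1)] * 80 + [("group_a", 0)] * 20 +   # 80% approved
    [("group_b", 1)] * 40 + [("group_b", 0)] * 60     # 40% approved
)

def train(records):
    """Learn each group's historical approval rate."""
    totals = defaultdict(lambda: [0, 0])  # group -> [approvals, count]
    for group, approved in records:
        totals[group][0] += approved
        totals[group][1] += 1
    return {g: a / n for g, (a, n) in totals.items()}

def predict(model, group):
    # Approve whenever the historical rate for that group exceeds 50%.
    return model[group] > 0.5

model = train(history)
# Two otherwise-identical applicants now get different answers purely
# because of their group: the old bias carried straight over.
```

Real systems are vastly more complicated than this frequency table, of course — which is exactly the problem: the same carry-over happens, but buried where no one can point at it.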
All is not lost, however; at least not yet. Very smart people at Google, Microsoft, and other places are working on the problem as part of the larger issue of AI ethics. (This is a good article about the current state of AI ethics if you’re interested.)
Well, that’s it for this episode; I hope it wasn’t too frightening. Remember, if you can’t sleep tonight you can always call your mechanic (if you know their real name). As always, if you have any questions about AI or Machine Learning, or need some movie ideas, drop me a line.