Fermi, Filters, and Funky Aftershave

“Where is everyone?”

No, that’s not me looking for people to harass into reading my blog. Those are the words of Enrico Fermi. In case you haven’t heard of him, Fermi was a really famous physicist who was always invited to all the right parties. Oh, he was also pretty much the father of the nuclear age. (I’m not sure what physicist parties are like. Perhaps Kevin can touch on this in his next blog?).

Fermi’s quote has to do with what’s known as the Fermi Paradox. Basically, given that there are tens of billions of planets in our galaxy alone, even if intelligent life is a relatively rare occurrence, we should see evidence of other civilizations all over the place: trashy alien reality television, weird sporting events, unidentifiable fast food wrappers, etc. But to date, nothing, nada, zilch, zippo; thus the paradox.

A scene from ‘Real Housebots of Andromeda 17’

Ok. I know you’re probably thinking: “But what does this have to do with AI?”

I’m getting there. The road to the truth is sometimes long and bendy.

There are a lot of theories about why we don’t see any other signs of life in the universe. It could be that galactic society took one look at us and then strung up the cosmic equivalent of that yellow caution tape around our solar system. Hopefully that’s not the case.

The explanation I’d prefer to focus on today is known as the Great Filter. The idea is that, much like the filter that prevents coffee grounds from ruining the creamy goodness of your triple latte mochaccino, there’s a filter that prevents civilizations from reaching a point where they could accomplish the kind of stuff that far, far away astronomers could see through their telescopes and remark, “Hey, that’s weird.”

So if there is some point in the evolution of civilizations where they all tend to wipe themselves out, the question becomes: Are we Earthlings past that point already or are we approaching it?

 

“That’s all very interesting, but what does it have to do with Artificial…”

I’ll get there. Bendy, remember?

I watch a lot of the History Channel, and apparently in the early days of humankind people sought to harness the power of fire so it could be a useful tool rather than something that terrorized them. However, many among our ancient ancestors were concerned that fire was too dangerous, and the tribal elders reminded everyone of the consequences of the pointy stick fiasco (I’m paraphrasing).

“Yes, but I still don’t see…”  Bennnn-deeeeee.

And continuing forward through history, every new technological advance has brought with it the worry that we’ll filter ourselves out before becoming the first galactic civilization. So even though the concern has been there from fire, to gunpowder, to cloning the DNA of dinosaurs as part of a new amusement park venture (that last example may have been from a different channel), I don’t think any of these compare to what is probably the actual filter that all civilizations have to overcome.

Yes, now I’m talking about AI.

“Finally!”

Even though we’re probably many years away from a super-intelligent AI, imagine for a minute an AI with the same intelligence as a person. That would still be a pretty big deal because digital circuits operate about a million times faster than organic ones (in other words, us). So pretend you could put this human-level AI on a problem for a week. That would be the same as a person working on the problem for about 20,000 years, or 20 people for 1,000 years, or 50 people for…

“We get the idea.”

Now imagine that whoever gets this AI first has only a six-month head start on their competition. That’s the equivalent of half a million years in human time. What would another country do if they thought we had this capability or were even close to it? I’m not sure, but I don’t think I’ll be moving to Silicon Valley anytime soon.
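If you want to check my math, here’s the back-of-the-envelope version (using that very hand-wavy million-to-one speed figure from above):

```python
# Back-of-the-envelope math, assuming the (hand-wavy) figure of a
# million-to-one speed advantage for digital circuits over us.
SPEEDUP = 1_000_000

one_ai_week = 7 * SPEEDUP / 365     # days of human effort, in years
print(f"{one_ai_week:,.0f} years")  # ~19,000 years (call it 20,000)

head_start = 0.5 * SPEEDUP          # a six-month lead, in human years
print(f"{head_start:,.0f} years")   # 500,000 years
```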

This was the idea behind Elon Musk co-founding the OpenAI initiative. Given that he’s been very vocal about the dangers of AI in the past, at first it seems odd that Elon* would help start a lab trying to develop an intelligent AI. (* I like to think Elon is the kind of person who would let me call him by his first name if we ever met; plus, “Mr. Musk” sounds like a really cheap aftershave and I’m not sure I could say it with a straight face.)

Elon’s idea was that by being the first to develop an intelligent AI, OpenAI would have enough of a head start to steer the direction of AI research and development and make sure it’s used to benefit humankind. Given the history of filtering mentioned above, that plan kind of makes sense.

Hopefully, whoever gets there first—Elon or others—will have humanity’s best interests in mind and we can maybe be the first ones to make it past the filter point.

Then we’ll be the ones putting up the caution tape.

Well, that’s it for this episode. As always, if you have any questions about AI or Machine Learning, or need some physics party plans (I know a guy), drop me a line.

Thanks!

Under the Hood

Shhhhhhhhhh……..

Lean in a little closer…closer…whoa, not that close!

I’m speaking (writing) in hushed tones because in this special sort-of-but-not-really-post-post Halloween episode we’re discussing the dark side of AI.

Imagine this nightmare scenario: One day in the not-too-far-off future, you’re on the way to work. It’s a beautiful day. The sun is shining, the birds are chirping, and your self-driving car is doing all the work while you relax and sing along with your favorite rap-opera-country song.

Suddenly and for no apparent reason, your car violently lurches to the left and drives straight into an artificial tree! Stunned, you emerge a few moments later. The car’s safety features have spared you any physical injury, but the horror slowly sets in that your pumpkin spice latte is gone forever. The EMT robots arrive and begin probing you with instruments you don’t understand as your head clears and you start to wonder what happened.

“Did my constant singing finally drive the car to suicide? Surely my voice isn’t that bad.”

It is, but that probably wasn’t the cause. A short while later, a group of software engineers gathers around the crash site to search for a cause, the last strains of “Carmen Wrecked My Pickup for Shizzle” crackling through the damaged speakers. They mumble to each other in incomprehensible jargon, reminiscent of an ancient forgotten tongue. After a few moments of silence, some head scratching, and more silence, they give each other a knowing glance.

In the morning, they’ll blame it on the hardware.

And now many of you are thinking, “That’s just silly. My mechanic, Bunky, can plug a thing into the car now and find out what’s wrong with it.” Yes, that’s true today, but, on a side note, why are all good mechanics only known by their nickname? Bunky, Cooter, Skeeter…it’s like they’re part of some secret society or something. Or maybe their nicknames hide their real identities so you don’t call them at 3 a.m. when you can’t sleep because your car made a noise on the way home and it’s probably nothing but you’re thinking the car may be possessed by the spirit of Mikhail Baryshnikov because you saw this show on the Discovery channel where…

Sorry, getting off track there. Long story short: I recently had to switch mechanics.

Anyway, cars today are relatively simple compared to the fully autonomous vehicles of the near future. And it’s that complexity that will make it practically impossible to figure out what finally drove your car over the edge. As problems become more complex, rather than programming computers to solve the problem directly, we have to teach them.  I know that sounds silly, but board games may help illustrate what I mean.

When Deep Blue from IBM beat the world chess champion in 1997, it was able to do so basically using brute force. There are roughly 10^47 possible board configurations in chess, and the computer could search through hundreds of millions of positions every second to determine the best move. It was vastly different last year when AlphaGo, a computer built by Google’s DeepMind, beat the human Go champion. Go is an ancient Chinese board game with about 2.08 × 10^170 possible board configurations. I tried to think of an example to help us get our heads around that number, but I’m afraid I’m just not that creative. Grains of sand on a beach? Not even close. Cat hairs on your favorite sweater? Nope. Number of times your weird uncle has embarrassed you during the holidays? Warmer, but still not there. If you had a million weird uncles and you were all there for every holiday in every country from the beginning of time? Still not enough.

You see my dilemma. It’s just a really, really huge number. So how then did a computer beat the human champion? The engineers let it learn by watching the moves from thousands of human matches. Then, after the AI had a rough grasp of the game, the engineers employed what’s called reinforcement learning. Basically, the computer played millions of games against itself until it was amazingly good.
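To make “played millions of games against itself” a little more concrete, here’s a toy sketch of the same idea, reinforcement learning through self-play, applied to the much humbler game of Nim: 21 sticks, take 1 to 3 per turn, whoever takes the last stick wins. (To be clear, this is a made-up teaching example; it has nothing to do with AlphaGo’s actual code.)

```python
# Toy reinforcement learning via self-play: two copies of the same
# agent play Nim against each other and share one value table.
import random
from collections import defaultdict

Q = defaultdict(float)        # Q[(sticks_left, move)] -> learned value
ALPHA, EPSILON = 0.1, 0.1     # learning rate, exploration rate

def choose(sticks):
    moves = [m for m in (1, 2, 3) if m <= sticks]
    if random.random() < EPSILON:                     # explore sometimes
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(sticks, m)])   # otherwise exploit

for game in range(200_000):                # millions, in the real thing
    sticks, player, winner = 21, 1, None
    history = {1: [], 2: []}
    while sticks > 0:
        move = choose(sticks)
        history[player].append((sticks, move))
        sticks -= move
        if sticks == 0:
            winner = player                # took the last stick
        player = 3 - player                # switch players
    # Nudge the winner's moves up and the loser's moves down.
    for p in (1, 2):
        reward = 1.0 if p == winner else -1.0
        for state_move in history[p]:
            Q[state_move] += ALPHA * (reward - Q[state_move])

for sticks in (5, 6, 7, 9, 10, 11):
    best = max((m for m in (1, 2, 3) if m <= sticks),
               key=lambda m: Q[(sticks, m)])
    print(f"{sticks} sticks left -> take {best}")
```

Nobody tells the program the classic “leave a multiple of 4” strategy; given enough games, it tends to find it on its own. Which, incidentally, is exactly the problem we’re about to run into: it learned something, but good luck asking it why.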

So, you can begin to see the issue. Start with an incredibly complex problem. Create a very complex AI system, and then let the AI learn on its own how to solve the problem. The result is that we have no idea why the AI is making the decisions it is. And so we may never know what happened with the car in the scenario above. Maybe your Toyota became self-aware like Skynet from The Terminator. (I realize for most of you “from The Terminator” is superfluous there. It’s obviously not Skynet from Sense and Sensibility. Not saying it’s a bad movie, just a little lacking in the killer robot department. Maybe in the sequel.)

The scary part of all this is that advanced AIs are being implemented in more and more settings where it would seem important to know the why behind the decisions: who gets what healthcare treatment, who is admitted to which school, who should be paroled, and, of course, which TV shows should be renewed for another season. Also, these systems are often trained using existing data sets. If the data is already tainted with human biases, we’ll need some way to make certain those biases aren’t being carried over into the AI decisions. Unfortunately, as AI systems become more advanced, there will be less insight into what’s going on under the hood. That’s why you may hear them referred to as “black boxes”: we can’t see inside.
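What would checking for that even look like? At its very simplest, something like this sketch: compare the model’s error rate across subgroups and treat a big gap as a red flag. (Every record and group below is made up.)

```python
# The world's simplest bias audit (all data hypothetical): does the
# model get one group wrong more often than another?
records = [
    # (group, model_prediction, actual_outcome)
    ("A", "parole", "parole"), ("A", "deny", "parole"),
    ("B", "deny", "deny"),     ("B", "deny", "parole"),
    # ...thousands more rows in any real audit
]

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    errors = sum(pred != actual for _, pred, actual in rows)
    print(f"group {group}: error rate {errors / len(rows):.0%}")
```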

All is not lost, however; at least not yet. Very smart people at Google, Microsoft, and other places are working on the problem as part of the larger issue of AI ethics. (This is a good article about the current state of AI ethics if you’re interested.)

Well, that’s it for this episode; I hope it wasn’t too frightening. Remember, if you can’t sleep tonight you can always call your mechanic (if you know their real name). As always, if you have any questions about AI or Machine Learning, or need some movie ideas, drop me a line.

Thanks!

 


Bananas and Peanut Butter

Apparently, there was an overwhelming response to my first blog so I get to write another one. Thanks again to both of you who read it.

Given all the doom and gloom about AI in the media recently, in this episode I’d like to talk (write) about some of the positive things going on with a branch of AI known as “deep learning”. But first, a little history. Way back in ancient times when computers were just getting started in the ‘60s, while some scientists worked to perfect the lava lamp, others invented something called a neural network. Neural networks were loosely modeled after how scientists thought the brain worked back then, with inputs and outputs and, in between, hidden layers of artificial neurons.

Like a brain without the squishy parts.
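For the curious, here’s a minimal sketch of that picture in code (just NumPy and made-up weights): numbers go in one side, pass through a hidden layer of artificial neurons, and a number comes out the other.

```python
# A tiny neural network's forward pass: no learning yet, just the
# input -> hidden layer -> output plumbing, with random weights.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))  # squashes any number into (0, 1)

rng = np.random.default_rng(0)

x = rng.normal(size=3)                           # 3 input values
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)    # 4 hidden neurons
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)    # 1 output neuron

hidden = sigmoid(W1 @ x + b1)     # the hidden layer does its thing
output = sigmoid(W2 @ hidden + b2)
print(output)                     # a single score between 0 and 1
```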

Neural networks were really good at solving a lot of problems, but unfortunately they were also similar to grilled cheese sandwiches (stay with me here). As problems became more complex, the neural networks required more hidden layers to find a solution. However, past about two layers the learning signal effectively faded away before it reached the early layers, so the networks wouldn’t solve the problem and generally led to a mess that someone had to clean up instead of the cheesy goodness they were hoping for.

This was pretty much the situation until 2006, when Geoffrey Hinton and his team at the University of Toronto figured out that by pre-training the layers one at a time ahead of time, they could overcome this problem and voila!, peanut butter and banana tall stack.

Hungry yet?

These systems with many layers were called deep neural networks and could be used to solve much more complex problems, without any cheese being scraped off the ceiling. Fast forward to today and deep learning is being implemented everywhere, including the voice and image recognition done by Apple, Google, and Amazon. It’s also the foundation of self-driving cars.

 

Now rewind back to last year.

I met Andrew Beck at a conference in DC. Andrew has an MD from Brown and a PhD from Stanford. He has started three successful companies and is also an associate professor at Harvard (you know, the kind of person you secretly hope has some awful dark secret like a third foot or something). Dr. Beck also appeared to be a genuinely nice guy (unfortunately) as he presented some research he had done on pathology. The first study he presented set out to determine which factors were most important in getting accurate pathology results.

Care to take a guess? Anyone? Type of equipment? Experience of the pathologist? Day of the week?

Who said “day of the week”? Ding! Ding! Ding! You win! You win, that is, unless your sample hits the pathologist’s desk first thing Monday morning; then, not so much. It turns out pathology labs are really busy on Mondays and Tuesdays, when a pathologist may have upwards of a thousand slides to review in one day. By Wednesday, however, things have slowed down to the point that you can get a more reliable analysis.

For those not sure how important this is, it’s really, really important. The pathology results can determine the next steps in care ranging from no treatment at all to aggressive cancer treatment. I’m not authorized to dispense medical advice, but as your friend, if you need any tests done, I suggest they occur after taco Tuesday.

So back to Andrew Beck and team (oh, and before I forget: he only had the two feet, as far as I could tell anyway). They went to work on the problem with a deep learning AI. After fine-tuning the system, they compared it against an actual pathologist using samples with known results. The AI had an error rate of 7.5%. The pathologist, taking his time, achieved an error rate of 3.5%.

So good news: people are still better at some stuff, right? Sort of, but the better news was that working together, the AI and pathologist reduced the error rate to 0.5%. It became apparent that in pathology, people and computers make different types of mistakes, and by combining them, the accuracy of both can be improved. The whole process could also be sped up with the AI identifying features on the slide that the pathologist should focus on versus taking the time to review the entire sample.
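If you like back-of-the-envelope math, the “different types of mistakes” part is the whole trick. Assuming (unrealistically) that the AI’s errors and the pathologist’s errors were completely independent:

```python
# Back-of-the-envelope only, assuming the two sets of mistakes are
# completely independent (real errors are never this tidy).
ai_error, human_error = 0.075, 0.035
both_wrong = ai_error * human_error   # chance both miss the same slide
print(f"{both_wrong:.2%}")            # ~0.26%, the same ballpark as the
                                      # 0.5% the combined team achieved
```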

Now fast forward to a few weeks ago.

I can’t be more specific because I don’t know when you’re reading this. If it’s 2275, then it’s obviously been more than a few weeks, and I’m more concerned that my words are the ones that posterity has passed on to a future society and suspect that your utopia is consequently in serious danger of collapse. Anyway, I had never heard of Taryn Southern until a few weeks ago (in 2017). According to my daughter, who is an expert in such things, Taryn is a YouTube personality who was on American Idol and is now also a pop singer. Taryn’s new album (or whatever you call them now) was just released and the single “Break Free” is doing well.

I’m sure you’re beginning to wonder how this is relevant. Trust me, it is. Taryn’s new album was the first to be written and produced entirely by an AI. Taryn found working with the AI in most cases to be preferable to working with human collaborators. She also said it sped up the creative process 20-fold.

So, yes, finally, we have the technology to streamline the creation of pop songs.

I’m running out of time, so, to sum up: from pathology to top 40, humans and AI working together are discovering new and better ways of doing things, helping us find new approaches to old problems.

So remember: the next time you hear that AI is going to destroy mankind, well, it might, someday. But more optimistically, AI may also be responsible for getting you a more accurate diagnosis or for producing that new hit song you can’t get out of your head.

If you have any questions about AI or machine learning, or need a good grilled cheese recipe, drop me a line.

Thanks!

QIPSPHAMLS

I was asked to write an introduction, so welcome everyone to my very first blog, ever.

I’m currently in the dissertation phase of my PhD in Computer Science so it’s been beaten into my head over the past few months that I really don’t know anything about anything (just kidding; it’s actually far worse than that). However, despite being completely unqualified, I’m going to write about my favorite topic, Machine Learning and Artificial Intelligence. Actually, favorite topics, I guess.

Rather than start with blah blah blah about blah blah and how important blah blah blah is, I thought I’d begin this episode with some Frequently Asked Questions (FAQs). The more astute among you are thinking, “How can there be FAQs if this is the first one? For that matter, how can there be any questions at all, frequent or otherwise?” Honestly, these are just Questions I’m Pretty Sure People Have About Machine Learning, but QIPSPHAMLs doesn’t roll off the tongue quite as easily as FAQs.

So anyway, let’s get started with some FAQs (QIPSPHAMLs).

  1. What is Machine Learning?

Machine Learning is the name given to a collection of algorithms used to “give computers the ability to learn without being explicitly programmed,” according to Arthur Samuel, who is generally regarded as the father of Machine Learning. The algorithms generally fall into two categories: supervised and unsupervised learning. Supervised learning typically requires large data sets to train the algorithm properly; with unsupervised, you just run the algorithm and hope nothing blows up.

Just kidding. That hardly ever happens.
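To make that less abstract, here’s a minimal supervised sketch, with made-up fruit measurements and scikit-learn doing the learning:

```python
# Supervised learning in miniature: the algorithm learns from labeled
# examples (the "training data"). All the fruit data is invented.
from sklearn.tree import DecisionTreeClassifier

# features: [weight in grams, smoothness from 0 to 10]
X_train = [[150, 9], [170, 8], [130, 3], [120, 2]]
y_train = ["apple", "apple", "orange", "orange"]

model = DecisionTreeClassifier().fit(X_train, y_train)
print(model.predict([[160, 7]]))   # -> ['apple'], if all went well
```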

Unsupervised techniques are used when there is no training data available and are good at putting unknown things into categories. Imagine you have a pile of weird, nonsensical phrases that you don’t know what to do with. Unsupervised learning algorithms could divide them into categories of legal terms and medical terms, even though no one understands either of them.
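And a sketch of that, too, with KMeans from scikit-learn doing the grouping (the phrases are made up, and notice that no labels appear anywhere):

```python
# Unsupervised learning in miniature: no labels, just a pile of
# phrases the algorithm sorts into two groups on its own.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

phrases = [
    "res ipsa loquitur", "habeas corpus", "voir dire",
    "idiopathic thrombocytopenia", "bilateral otitis media",
    "acute myocardial infarction",
]

X = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 4)).fit_transform(phrases)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # two clusters, even though the algorithm understands
                # neither legalese nor medicalese
```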

There exists a large collection of Machine Learning algorithms to do everything from sorting fruit to diagnosing disease to deciding what you’d like to buy next on Amazon. I’m not going to go through them all now because there are a lot of them and, frankly, the names are awful, like convolutional neural network and self-organizing feature map. They really just serve as a reminder of why computer scientists shouldn’t be allowed to name things.

  2. Is a robot going to take my job?

Probably. The question you should’ve asked in this imaginary conversation is, “When?”

The future is here to take your job and his name is Pepper.

Well, it depends on what you do. If you’re a nurse or physical therapist you’ve most likely got 10 or 20 years. If you’re a truck driver you should probably head back to school next semester. More immediately, there’s a lot of good that can be done by combining human intuition and expertise with artificial intelligence (AI).

It’s a very exciting time for this field, and if I get to write another one of these, I’ll go into more detail in a future episode.

  3. I heard Elon Musk and Stephen Hawking both said that AI is a threat to mankind. Are computers about to take over?

Wow, great question. The short answer: Probably not.

Right now, software is brittle. In the context of programming, that means the code is easily broken, which, as I think about it, is pretty much what brittle means in every other context, too.

In other words, the army of killer robots would probably be done in by a missing semicolon or a Windows update.

There are some real threats from AI, including strain on our current economic model as the combination of AI and robotics takes over more of the job market. I plan to write a post that focuses on the actual dangers of AI in the future. (The post will be in the future; the AI concerns are current.)

  4. What’s the difference between AI and Machine Learning?

I knew you were going to ask that.

The terms are often used interchangeably, but there are some subtle differences.

AI is the overarching field of trying to make computers think like people…although there is some current research suggesting that thinking like people may not be the best thing for computers. Machine learning is a subset of AI that typically focuses on data: recognizing patterns for things like speech and image recognition, categorization problems, checking whether you cheated on your taxes, etc.

Pretty much everything we have done so far is considered narrow AI, meaning focused on a specific problem like playing a game or recommending friends on Facebook. General AI is the term for a computer that thinks like us. We’re not there yet, but it’s coming sooner than most people think. I also plan to write about that in an upcoming episode.

I was told to make this entry a 5- or 10-minute read, which is difficult because I have no idea how fast you read. I’m guessing this is close to enough for most of you so I’ll stop here.

If you have any questions about AI or Machine Learning, or just need help naming something, drop me a line.

Thanks!
