Elon Musk: ‘we are summoning the demon’

At an MIT symposium, Elon Musk, CEO of Tesla and SpaceX, stated that he believes artificial intelligence is “potentially more dangerous than nukes.”

“With artificial intelligence, we are summoning the demon,” Musk said last week at the MIT Aeronautics and Astronautics Department’s 2014 Centennial Symposium. “You know all those stories where there’s the guy with the pentagram and the holy water and he’s like… yeah, he’s sure he can control the demon, [but] it doesn’t work out.”

He also suggested that more government regulation might keep us from unleashing hell on earth.

“If I were to guess at what our biggest existential threat is, it’s probably that [AI]. I’m increasingly inclined to think there should be some regulatory oversight, maybe at the national and international level just to make sure that we don’t do something very foolish.”

Perhaps we should get the church involved too, although when it comes to existential threats it might be better to bring in the philosophers.

The idea that we will become the victims of our own inventions goes back thousands of years. The golem first appeared in the Bible, and stories of golems also appear in the Talmud. According to Wikipedia, the word golem ‘is often used as a metaphor for a brainless lunk or entity who serves man under controlled conditions but is hostile to him under others.’

AIs could definitely fall into that category, I suppose.

Sure, there are potentially negative consequences to creating a rogue AI, although I think they would probably be more along the lines of a program accidentally erasing a few thousand emails, or getting stuck in a fugue state and locking up a network, rather than a Terminator-like Skynet that launches a war on humankind.

Musk seems to be balancing somewhere near the edge of a Frankenstein complex: if we attempt to create life through technology, we risk incurring the wrath of one god or another. The subtitle of Frankenstein, The Modern Prometheus, also has golem-like meaning in both Greek and Latin mythology. In the Greek versions, Prometheus was a Titan who created mankind at the behest of Zeus, making a being in the image of the gods that could have a spirit breathed into it. In the Latin version, Prometheus makes man from clay and water, and because this goes against the laws of nature, he is punished by his own creation.

Of course, we already have Asimov’s Three Laws of Robotics, and he later added a fourth, the ‘zeroth’ law, to precede the others:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Not that those few words written by a science fiction author back in 1942 are going to save us from an AI gone berserk.

Personally, I welcome AI research and think it will ultimately aid humanity. However, AI can be a tricky thing, and sometimes the results are unexpected.

Back in the early ’80s I was working on a game for the Radio Shack Color Computer in which players had to negotiate a series of rooms filled with random walls and other objects while being chased by a computer-controlled ‘monster’. The chase algorithm was simplicity itself: ‘just keep moving closer to the player until you catch him or he escapes.’ Unfortunately, I quickly discovered that the monster tended to get stuck behind things, and it was easy for players to simply hide behind the randomly placed walls and objects.
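In modern terms, that greedy step amounts to something like the following. This is a minimal Python sketch for illustration only; the original was Color Computer code, and every name here is an assumption, not the actual program:

    # A minimal sketch of the greedy chase step, assuming a simple grid
    # world with walls stored as a set of (x, y) cells. All names here
    # are illustrative, not from the original game.

    def sign(n):
        """Return -1, 0, or 1 depending on the sign of n."""
        return (n > 0) - (n < 0)

    def chase_step(monster, player, walls):
        """Move the monster one cell straight toward the player, unless blocked."""
        nx = monster[0] + sign(player[0] - monster[0])
        ny = monster[1] + sign(player[1] - monster[1])
        if (nx, ny) not in walls:
            return (nx, ny)
        return monster  # blocked: the monster stalls, which is exactly the bug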

So I wrote a very rudimentary AI routine that basically said, ‘if something doesn’t work after a few attempts, try moving in some random direction for a random number of moves.’
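Building on the chase_step sketch above, that fallback might look something like this. The threshold and the range of wander lengths are assumptions; I’m not quoting the original values:

    import random

    STUCK_LIMIT = 3  # assumed threshold; the original's value isn't stated

    def monster_move(monster, player, walls, state):
        """Chase greedily, but after a few blocked moves wander at random.

        `state` is a dict carrying 'stuck' (count of blocked attempts),
        'wander' (random moves remaining), and 'dir' (the committed
        direction). Start with state = {"stuck": 0, "wander": 0}.
        All of these names are hypothetical.
        """
        # If mid-wander, keep following the randomly chosen direction.
        if state["wander"] > 0:
            state["wander"] -= 1
            nx = monster[0] + state["dir"][0]
            ny = monster[1] + state["dir"][1]
            return (nx, ny) if (nx, ny) not in walls else monster

        # Otherwise try the ordinary greedy chase step.
        nxt = chase_step(monster, player, walls)
        if nxt != monster:
            state["stuck"] = 0
            return nxt

        # The greedy move was blocked; after a few failures, commit to a
        # random direction for a random number of moves.
        state["stuck"] += 1
        if state["stuck"] >= STUCK_LIMIT:
            state["dir"] = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
            state["wander"] = random.randint(2, 8)
            state["stuck"] = 0
        return monster

Committing to a single direction for several moves is what lets wall-following behavior emerge on its own, which is exactly what happened next.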

The first time I ran the updated program, it happened to generate a room with a ‘wall’ running five or six blocks in a row. As the monster approached, I moved my player behind the wall to see how it would handle the problem. The program moved the monster to the left, so I moved to the right. It then moved to the right, so I moved to the left. We went back and forth a few more times, and then it surprised me by ignoring what I did and continuing along the wall in one direction until it found the end and could come around to catch me.

I never programmed it to solve a running-wall puzzle; I just told it to try something new when the old chase algorithm didn’t work. My ‘try something new’ routine accidentally gave the program a problem-solving tool I never imagined, and even though I wrote the code, there was no way I could predict what the program would do in any given situation.

So perhaps we can never predict what a true AI might do.

Siri, do you want to kill me?
