Robots won’t destroy the human race: that’s kind of our thing


Last year, celebrated physicist Stephen Hawking told the BBC that artificial intelligence could bring about the end of the human race:

“It would take off on its own, and re-design itself at an ever increasing rate… Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.

“The development of full artificial intelligence could spell the end of the human race.”

The idea of machines taking over has been a staple of science fiction for generations, but could today’s technology really contain the seeds of our own destruction?

Such fears were stoked when it was reported last year that a computer had finally passed the Turing Test. Alan Turing, the father of modern AI, originally posited the test in his 1950 paper, “Computing Machinery and Intelligence.” Turing recognized the difficulty of answering the question of whether machines can truly think, owing to our own incomplete notions of how thinking should be defined. He therefore suggested an alternative, utilitarian approach to answering the question: an imitation game. If a human interrogator cannot determine whether the entity with whom he is communicating is another human or a machine, then we may for all practical purposes say the machine is thinking.

Almost from the moment it was published, Turing’s hypothesis was assailed: Who would make the determination? A single person? A crowd? And how long should the conversation run before a verdict must be rendered? Indeed, some biographers have suggested that Turing merely posited the imitation game as a thought experiment, never meaning for it to be used as a serious determinant of whether a machine can actually think.

In the summer of 2014, judges convened at London’s Royal Society to participate in a Turing Test involving an electronic correspondent named Eugene Goostman, who might be either a computer or another human being. In the end, 33 percent of the judges were fooled into believing Eugene was a real person, and the University of Reading’s Kevin Warwick announced the results:

“We are proud to declare that Alan Turing’s Test was passed for the first time on Saturday. In the field of Artificial Intelligence there is no more iconic and controversial milestone than the Turing Test, when a computer convinces a sufficient number of interrogators into believing that it is not a machine but rather is a human.”

A sweeping statement, to be sure. But a closer examination reveals that the results do not quite hold up.

For starters, the judges were told that Eugene was a 13-year-old boy from Odessa, Ukraine. This foreknowledge meant they would attribute any choppiness in his English and any lack of familiarity with Western cultural references to his age and foreign nationality. But isn’t this putting a thumb on the scale? Even if one believes that Turing meant his thought experiment to be taken seriously, this handicapping seems to violate the spirit of a fair Turing Test. In addition, the requirement that 30 percent of judges be fooled after five minutes of conversation is a modern reading of a prediction Turing once made, not a pass/fail threshold he ever proposed. Indeed, the results were unconvincing to Imperial College’s Professor Murray Shanahan:

“Of course the Turing Test hasn’t been passed. I think it’s a great shame it has been reported that way, because it reduces the worth of serious AI research. We are still a very long way from achieving human-level AI, and it trivialises Turing’s thought experiment (which is fraught with problems anyway) to suggest otherwise.”

Shanahan’s comments echo my own experience at university. While pursuing simultaneous degrees in computer science and philosophy, I saw firsthand the high regard in which philosophers held artificial intelligence. Professors and students in my Philosophy of Mind class routinely speculated about the rise of machine intelligence, and vigorous debates ensued about whether we were already witnessing the emergence of thinking machines around us. I remember walking from such seminars down to Loyola’s Computing Lab to build neural networks, games that would challenge (and often beat) human competitors, and advanced pattern-recognition algorithms. At no point during those late-night jam sessions did I or my fellow coders believe we were doing anything other than constructing elaborate iterative scripts: if this, then that. Was that enough to constitute true intelligence? We certainly didn’t think so, and we were amused to see just how seriously work like ours was taken by the philosophy set: if they only knew the tricks we used to make our programs look like real thinking.
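To give a flavor of those tricks, here is a minimal, purely illustrative sketch of the kind of keyword-matching responder hobbyists have built for decades. The rules and canned replies are invented for this example; they are not taken from Eugene Goostman or any real Turing Test entrant.

# Purely illustrative: a toy "if this, then that" responder, not code from
# any actual chatbot or Turing Test entrant.

RULES = [
    ("weather", "Ugh, don't ask. It rains all the time where I live."),
    ("how old", "I'm thirteen. Why does everybody keep asking me that?"),
    ("music", "Whatever my friends are listening to, I guess."),
]

FALLBACK = "Hmm. Could you say that some other way? My English is not so good."

def reply(message):
    """Return the canned answer for the first keyword that matches."""
    text = message.lower()
    for keyword, canned_answer in RULES:
        if keyword in text:         # if this...
            return canned_answer    # ...then that
    return FALLBACK                 # deflect when no rule fires

print(reply("How old are you, Eugene?"))   # triggers the "how old" rule
print(reply("Do you ever dream?"))         # no rule fires, so it deflects

A judge chatting with something like this for a few minutes can easily mistake deflection for personality, which is exactly the effect a teenage persona with imperfect English is designed to amplify.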

Today, an entire class of celebrity scientists like Raymond Kurzweil and Kevin Warwick base their careers on prognosticating the rise of truly intelligent machines and issuing dire warnings that we must take heed before it’s too late. Given recent advances in areas like the amplification and coding of neural signals, and in environments where devices respond automatically to the presence of human activity, it’s tempting to believe we are on the verge of a true revolution that will lead to the emergence of autonomous thinking machines. But today’s “thinking” devices are still only simulating human thought by executing iterative code written by human authors: if this, then that. And those devices execute everything their human authors tell them to do, including the mistakes.

Consider Toyota. In 2013, an Oklahoma jury found the automaker liable for the death of a vehicle occupant due to unintended acceleration. Embedded-software expert witness Michael Barr showed that Toyota’s developers had violated basic principles of sound coding practice, including the unsafe use of shared global variables. This poorly written, overly complex code could cause Toyota vehicles to accelerate on their own even as drivers hit the brake. The jury awarded $3 million in damages. A year later, the troubled automaker was forced to pay a whopping $1.2 billion for misleading regulators and the public about the causes of sudden acceleration and for disavowing responsibility for the deaths it caused.
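To see why shared global state is so dangerous, consider the sketch below. It is emphatically not Toyota’s code (that was embedded C running inside an engine control unit, and the failure analysis involved far more than one variable). The names, such as throttle_target, are hypothetical; the example only illustrates the general hazard that two routines writing to the same unprotected global can silently override each other.

# Illustrative only -- NOT Toyota's code. A toy example of how two routines
# that share one unprotected global variable can silently override each other.

throttle_target = 0.0   # shared mutable global state: the hazard

def brake_override():
    """Driver is braking: command zero throttle."""
    global throttle_target
    throttle_target = 0.0

def cruise_control_update(set_speed, current_speed):
    """Cruise control nudges the throttle toward the set speed."""
    global throttle_target
    adjustment = 0.05 * (set_speed - current_speed)
    throttle_target = max(0.0, min(1.0, throttle_target + adjustment))

# If the control loop happens to run the cruise-control update after the brake
# override in the same cycle, the driver's braking intent is silently lost.
brake_override()
cruise_control_update(set_speed=70.0, current_speed=55.0)
print(throttle_target)   # prints 0.75, not the 0.0 the driver asked for

The code did exactly what it was told to do; the mistake belongs to its authors.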

To the jury and the Department of Justice, there was no question who was at fault: Toyota, and specifically the human agency that created the code that killed. When adjudicating matters of life and death, we correctly assign responsibility to the human authors behind the code because we understand that code is really just a set of instructions, like a recipe for chocolate cake. If the cake gets botched, you don’t blame the ingredients; you blame the chef. The contrary view, that code can take on an independent life of its own and can therefore be assigned responsibility, is risible for good reason.

The real threat to humanity lies not in the rise of the robots, but in our own innate tendency to use new scientific insights to violate the well-being of others. Automated flight systems mutate into extrajudicial drone strikes. Life-saving drug therapies lead to a host of maladies (and their cures) invented by big pharma to feed the beast. Vast communication systems give rise to a surveillance society in which every motion and message is tracked by agencies exempt from constitutional limits.

These fears about technology actually disguise the fears we have about how such discoveries will be used, and by whom: us. Besides being a great film, The Terminator struck a nerve because it tapped into our growing unease about the rise of computer technology in the 1980s and how it was displacing traditional, often manufacturing-based economies. The Matrix was also a fun film, but it may have resonated with us because the elaborately constructed (and wholly artificial) world swirling around its protagonist echoed the suspicion many people were starting to feel about the extent to which data was being used to control our lives, from vast, impersonal financial markets to intrusive computer-based marketing.

Fears about technology really mask the fears we have about the uses to which we humans will put it.

So don’t worry about the robots just yet.


Originally published on element14.
