Stopping Skynet

But would Skynet really be that evil?

Graphic by Keegan Steele

The development of artificial intelligence technology has come into the popular science spotlight as a result of a letter recently drafted by the Future of Life Institute (FLI), a volunteer organization that aims to mitigate technological risks posed to humanity.

The letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter,” urges scientists to direct artificial intelligence research so that the technology develops to benefit mankind.

The letter is accompanied by a research priorities document describing the directions research should take in various fields related to artificial intelligence (AI), and emphasizing that AI research is worthwhile because of its potential to benefit mankind.

The letter itself is not ominous in tone, and states that AI may help alleviate some of humanity’s greatest problems, such as poverty and disease.

However, some prominent figures in the scientific community have voiced less optimistic opinions on the future of artificial intelligence technology.

Famous physicist Stephen Hawking has previously made statements about the potential dangers of AI. In an interview with the BBC, Hawking is reported to have said, “It would take off on its own, and redesign itself at an ever increasing rate [...] Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Hawking is one of the many scientists who have signed FLI’s open letter.

Elon Musk, CEO of SpaceX and another famous signatory of the FLI’s letter, has made similar statements about the perils of AI. On Twitter, he has argued that humanity needs to be careful and that the technology could potentially be more dangerous than nuclear weapons.

Musk has shown further support of the FLI’s efforts by donating $10 million to the institute.

However, not everyone shares quite the same outlook with regard to humanity’s future relationship with artificial intelligence.

“Culturally, the West has a Frankenstein complex, where any new technology is seen as a threat. Hence, the Terminator is the first image to come to mind for many in the West, whereas in Asia people think about Astro Boy helping people,” University of Manitoba computer science professor Jacky Baltes says.

“Deep down, I feel that empathy and emotion are fundamental to AI and that these super smart systems will need those as well to be super smart.”

Computer scientist and futurist Ray Kurzweil also holds a more positive outlook on the future of artificial intelligence. In a response to the drafting of the letter, Kurzweil calls technology a double-edged sword, citing biotechnology, which can be used in bioterrorism but has largely been beneficial, as an example.

In an article published in Time magazine, Kurzweil writes, “Consider biotechnology, which is perhaps a couple of decades ahead of AI. A meeting called the Asilomar Conference on Recombinant DNA was organized in 1975 to assess its potential dangers and devise a strategy to keep the field safe.”

In his book The Singularity Is Near, Kurzweil predicts that the singularity, the point in time when machines transcend human intelligence, will occur in the year 2045. Rather than machines surpassing human capabilities and then eradicating humankind, Kurzweil believes technology will become increasingly integrated with human bodies to enhance our capabilities and prevent human diseases.

Microsoft co-founder Paul Allen does not believe the singularity is near and has stated that Kurzweil fails to take into account the sophistication of software needed to reach the singularity. A complete understanding of human cognition would first be needed to develop AI capable of the same decision-making and behaviours as the human brain. While computer hardware has become more computationally powerful, there is no guarantee that software will advance at the same pace and reach the singularity by 2045.

In an article in MIT Technology Review, Allen writes, “This prior need to understand the basic science of cognition is where the ‘singularity is near’ arguments fail to persuade us.”

Baltes is also skeptical about the imminence of the singularity. “From a practical point of view, the point where we have uncontrollable super smart AI is at least 50 years away.”

Baltes also comments, “I don’t want to imply that there are no issues or no real danger to human life and well-being if robots and AI deliberately or accidentally misbehave. But that is true for any sophisticated technology. These are very real, hard legal and cultural problems that we as a society need to address. However, they are a far cry from the AI apocalypse.”
