5 Very Smart People Who Think Artificial Intelligence Could Bring the Apocalypse

Victor Luckerson   Dec. 2, 2014

Theoretical physicist Stephen Hawking poses for a picture ahead of a gala screening of the documentary ‘Hawking’, a film about the scientist’s life. AFP/Getty Images

‘The end of the human race’

On the list of doomsday scenarios that could wipe out the human race, super-smart killer robots rate pretty high in the public consciousness. And in scientific circles, a growing number of artificial intelligence experts agree that humans will eventually create an artificial intelligence that can think beyond our own capacities. This moment, called the singularity, could create a utopia in which robots automate common forms of labor and humans relax amid bountiful resources. Or it could lead the artificial intelligence, or AI, to exterminate any creatures it views as competitors for control of the Earth—that would be us. Stephen Hawking has long seen the latter as more likely, and he made his thoughts known again in a recent interview with the BBC. Here are some comments by Hawking and other very smart people who agree that, yes, AI could be the downfall of humanity.

Stephen Hawking

“The development of full artificial intelligence could spell the end of the human race,” the world-renowned physicist told the BBC. “It would take off on its own and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” Hawking has been voicing this apocalyptic vision for a while. In a May column responding to Transcendence, the sci-fi movie about the singularity starring Johnny Depp, Hawking criticized researchers for not doing more to protect humans from the risks of AI. “If a superior alien civilisation sent us a message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here—we’ll leave the lights on’? Probably not—but this is more or less what is happening with AI,” he wrote.

Elon Musk

Known for his businesses on the cutting edge of tech, such as Tesla and SpaceX, Musk is no fan of AI. At a conference at MIT in October, Musk likened improving artificial intelligence to “summoning the demon” and called it the human race’s biggest existential threat. He’s also tweeted that AI could be more dangerous than nuclear weapons. Musk called for the establishment of national or international regulations on the development of AI.

Nick Bostrom

The Swedish philosopher is the director of the Future of Humanity Institute at the University of Oxford, where he’s spent a lot of time thinking about the potential outcomes of the singularity. In his new book Superintelligence, Bostrom argues that once machines surpass human...


Richard Feynman on How Computers Think [or Not]

by SOCRATES

This is a classic video lecture by Richard Feynman, 1965 Nobel Prize Laureate in Physics, on how computers think [or not]. As always, Feynman gives an insightful presentation about computer heuristics: how computers work, how they file information, how they handle data, how they process that information within a finite amount of time to solve problems, and how they actually compute values of interest to human beings. These topics are essential to understanding which processes reduce the amount of work a computer must do to solve a particular problem, giving machines problem-solving speeds that outmatch humans in certain fields, even though they have not yet reached the complexity of human intelligence. The question of whether human thought is a series of fixed processes that could, in principle, be imitated by a computer is a major theme of the lecture, and Feynman, in his trademark teaching style, gives clear yet powerful answers for a field that has gone on to consume so much of our lives today. No doubt this lecture will be of crucial interest to anyone who has ever wondered about the process of human or machine thinking and whether a synthesis between the two can be made without violating logic. My favorite quote from this Richard Feynman video is his definition of a computer: “A glorified, high-class, very fast but stupid filing system.” See more here: Richard Feynman on How Computers Think [or...
