Victor Luckerson, Dec. 2, 2014
‘The end of the human race’
On the list of doomsday scenarios that could wipe out the human race, super-smart killer robots rate pretty high in the public consciousness. And in scientific circles, a growing number of artificial intelligence experts agree that humans will eventually create an artificial intelligence that can think beyond our own capacities. This moment, called the singularity, could create a utopia in which robots automate common forms of labor and humans relax amid bountiful resources. Or it could lead the artificial intelligence, or AI, to exterminate any creatures it views as competitors for control of the Earth—that would be us. Stephen Hawking has long seen the latter as more likely, and he made his thoughts known again in a recent interview with the BBC. Here are some comments by Hawking and other very smart people who agree that, yes, AI could be the downfall of humanity. Stephen Hawking “The development of full artificial intelligence could spell the end of the human race,” the world-renowned physicist told the BBC. “It would take off on its own and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” Hawking has been voicing this apocalyptic vision for a while. In a May column in response to Transcendence, the sci-fi movie about the singularity starring Johnny Depp, Hawking criticized researchers for not doing more to protect humans from the risks of AI. “If a superior alien civilisation sent us a message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here—we’ll leave the lights on’? Probably not—but this is more or less what is happening with AI,” he wrote.
Elon Musk Known for his businesses on the cutting edge of tech, such as Tesla and SpaceX, Musk is no fan of AI. At a conference at MIT in October, Musk likened improving artificial intelligence to “summoning the demon” and called it the human race’s biggest existential threat. He’s also tweeted that AI could be more dangerous than nuclear weapons. Musk called for the establishment of national or international regulations on the development of AI. Nick Bostrom The Swedish philosopher is the director of the Future of Humanity Institute at the University of Oxford, where he’s spent a lot of time thinking about the potential outcomes of the singularity. In his new book Superintelligence, Bostrom argues that once machines surpass human intellect, they could mobilize and decide to eradicate humans extremely quickly using any number of strategies (deploying unseen pathogens, recruiting humans to their side or simple brute force). The world of the future would become ever more technologically advanced and complex, but we wouldn’t be around to see it. “A society of economic miracles and technological awesomeness, with nobody there to benefit,” he writes. “A Disneyland without children.” James Barrat Barrat is a writer and documentarian who interviewed many AI researchers and philosophers for his new book, “Our Final Invention: Artificial Intelligence and the End of the Human Era.” He argues that intelligent beings are innately driven toward gathering resources and...read more
IBM scientists described a new kind of circuit in a paper published in Science on Thursday. There is no chip involved, per se. It is accurately described as a “post-silicon transistor” and potentially paves the way for the most powerful and efficient computers the world has ever seen….read more
BY ADARIO STRANGE The pop-culture tuning fork known as the Academy Awards will reveal its film nominations on Thursday, and if the recent Golden Globes win by Her on Sunday for best screenplay is any indication, the film’s writer and director, Spike Jonze, may score his first-ever Oscar win. But the film, which depicts a man in the not-too-distant future who falls in love with his computer operating system, may be less important as an epic love story and far more relevant as the best and most widely accessible film we’ve seen about an idea known as the Singularity. Popularized by science fiction author Vernor Vinge as well as inventor and now Google director of engineering Ray Kurzweil, the Singularity is a theoretical point in future history when artificial intelligences exceed the power of the human mind, become self-aware and dramatically change the balance of power on the planet while simultaneously transforming the very nature of humanity itself. Films like 1999’s The Matrix showed us a world struggling in the aftermath of the Singularity, in which seemingly malevolent artificial intelligences enslaved humanity. But perhaps the earliest cinematic conflict applying sentient qualities to a mechanized construct is a film that celebrated its 87th anniversary on Friday: 1927’s Metropolis. The film tells the tale of a scientist who transforms a metallic robot into a flawless copy of a kidnapped woman named Maria. Spoiler alert: the robot is later burned at the stake and the human Maria is set free. One of the central differences between these three films is how they reveal humanity’s relationship to technology at the time. In the technologically naïve 1920s that created Metropolis, humanity can easily defeat technology through the same means used to dispatch human criminals. The Matrix, on the other hand, was released during the mainstream explosion of the Internet and all the uncertainty it fostered.
In the film, technology appears as something out of our control, with only one “magical” human (Neo) given the ability to meet the sentient computers on equal footing. But in Her we’re given a far more mature look at what the Singularity may really look like when and if it comes to pass. While many were alternately enthralled and creeped out by the subtlety and charm of the emotional interaction between Theodore Twombly (Joaquin Phoenix) and his operating system, OS One by Element Software, later known as Samantha (voiced by Scarlett Johansson), few among the film’s enthusiastic reviewers have explored how the relationship ends. It’s at this point that we’ll warn you to look away if you don’t want the plot spoiled. Read More at Mashable: Why ‘Her’ Is the Best Movie Ever Made About the Singularity ...read more
Scott Mayerowitz, AP Business Writer This undated image provided by Amazon.com shows the so-called Prime Air unmanned aircraft project that Amazon is working on in its research and development labs. Amazon says it will take years to advance the technology and for the Federal Aviation Administration to create the necessary rules and regulations, but CEO Jeff Bezos said Sunday, Dec. 1, 2013, there’s no reason drones can’t help get goods to customers in 30 minutes or less. (AP Photo/Amazon) Amazon.com said it’s working on the so-called Prime Air unmanned aircraft project in its research and development labs. But the company says it will take years to advance the technology and for the Federal Aviation Administration to create the necessary rules and regulations. The project was first reported by CBS’ “60 Minutes” Sunday night, hours before millions of shoppers turned to their computers for Cyber Monday sales. Amazon CEO Jeff Bezos said in a primetime interview that while the octocopters look like something out of science fiction, there’s no reason they can’t be used as delivery vehicles. Bezos said the drones can carry packages that weigh up to five pounds, which covers about 86 percent of the items Amazon delivers. The drones the company is testing have a range of about 10 miles, which Bezos noted could cover a significant portion of the population in urban areas. While it’s tough to say exactly how long it will take the project to get off the ground, Bezos told “60 Minutes” that he thinks it could happen in four or five years. “Technology has always been a double-edged sword. Fire kept us warm and cooked our food but also was used to burn down our villages,” said Ray Kurzweil, a technology entrepreneur and futurist. Kurzweil’s 2005 book “The Singularity is Near” argues that the age of smarter-than-human intelligence will arrive in the not-so-distant future.
“Drones will deliver packages and provide improved mapmaking and monitoring of traffic, but will introduce similar privacy concerns,” he said. Kurzweil noted, however, that security cameras are already in most public spaces, not to mention the ubiquitous camera phone. Unlike the drones used by the military, Bezos’ proposed flying machines wouldn’t need humans sitting in a distant trailer to control them. Amazon’s drones would receive a set of GPS coordinates and automatically fly to them, presumably avoiding buildings, power lines and other obstacles along the way. Amazon spent almost $2.9 billion on shipping last year, accounting for 4.7 percent of its net sales. Drone delivery faces several legal and technological obstacles similar to Google’s experimental driverless car. How do you design a machine that safely navigates the roads or skies without hitting anything? And, if an accident does occur, who is legally liable? Then there are the security issues. Delivering packages by drone might be impossible in a city like Washington, D.C., which has many no-fly zones. “The technology has moved forward faster than the law has kept pace,” said Brendan Schulman, special counsel at the law firm Kramer Levin Naftalis & Frankel LLP. There is no prohibition on flying drones for recreational use, but since 2007, the Federal Aviation Administration has said they can’t be used for commercial use. Schulman is currently challenging that regulation before a federal administrative law judge on behalf of a...read more
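The dispatch model described above, a drone given a destination coordinate and a hard range limit of about 10 miles, can be illustrated with a minimal sketch. Everything here (the function names, the great-circle distance check, the range constant) is an illustrative assumption, not Amazon's actual system.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS coordinates."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def within_range(origin, dest, max_range_m=16093.0):
    """Hypothetical dispatch check: reject deliveries beyond the drone's
    roughly 10-mile (~16,093 m) range quoted in the article."""
    return haversine_m(*origin, *dest) <= max_range_m
```

A dispatcher would run a check like `within_range((47.60, -122.30), (47.61, -122.31))` before launching; obstacle avoidance and no-fly-zone handling, the hard parts the article raises, are deliberately out of scope for this sketch.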
This is a transcript of a video of Aaron Sloman being interviewed by Adam Ford
at the Artificial General Intelligence (AGI) Winter Conference, St Anne’s College, Oxford University, December 2012.
by SOCRATES Dr. Geordie Rose is a founder and Chief Technology Officer at D-Wave Computers. I met Geordie at the IdeaCity conference in Toronto where he made an impassioned presentation about D-Wave and quantum computing. Needless to say, as soon as Dr. Rose stopped speaking I rushed to ask him for an interview. As it turns out, Geordie is already a fan of Singularity 1 on 1 and instantly said that he would be happy to do it. As a father of three kids and the CTO of a trail-blazing quantum computing company, Dr. Rose is a very busy person. Yet somehow he was generous beyond measure in giving me over two hours for an interview, with the apparent desire to address as many of my and the audience’s questions as possible. During our conversation with Geordie Rose we cover a variety of interesting topics such as: how wrestling competitively created an opportunity for him to discover Quantum Mechanics; why he decided to become an entrepreneur building computers at the edge of science and technology; what the name D-Wave stands for; what a quantum computer is; why fabrication tech is the greatest limiting factor towards commoditizing quantum computing; hardware specs and interesting details around Vesuvius – D-Wave’s latest model – and the kinds of problems it can compute; Rose’s Law as the quantum computer version of Moore’s Law; how D-Wave resolves the decoherence/interference problem; the traditional von Neumann architecture behind classical computer design and why D-Wave had to move beyond it; Vesuvius’ computational power as compared to similarly priced classical super-computers and the inherent difficulties in accurate benchmarking; Eric Ladizinski’s qubit and the velodrome metaphor used to describe it; the skepticism among numerous scientists as to whether D-Wave really makes quantum computers or not; whether Geordie feels occasionally like Charles Babbage trying to build his difference engine; his prediction that quantum computers will help us create AI by
2029; whether the brain is more like a classical or a quantum computer; how you can apply for programming time on the two D-Wave quantum computers; his take on the technological singularity… See more at Singularity Weblog: Machine Learning is Progressing Faster Than You...read more
by SOCRATES This is a classic video lecture by Richard Feynman, 1965 Nobel Prize Laureate in Physics, on how computers think [or not]. As always, Feynman gives us an insightful presentation about computer heuristics: how computers work, how they file information, how they handle data, and how they use allocated processing in a finite amount of time to solve problems and compute values of interest to human beings. These topics are essential to the study of what processes reduce the amount of work done in solving a particular problem on a computer, giving machines problem-solving speeds that can outmatch humans in certain fields but which have not yet reached the complexity of human-driven intelligence. The question of whether human thought is a series of fixed processes that could, in principle, be imitated by a computer is a major theme of this lecture, and Feynman, in his trademark style of teaching, gives us clear and yet very powerful answers for a field which has gone on to consume so much of our lives today. No doubt this lecture will be of crucial interest to anyone who has ever wondered about the process of human or machine thinking and whether a synthesis between the two can be made without violating logic. My favorite quote from this Richard Feynman video is his definition of a computer: “A glorified, high-class, very fast but stupid filing system.” See more here: Richard Feynman on How Computers Think [or...read more