
What will happen when machines outthink us?

Techies and scientists at the Singularity Summit say now is the time to prepare for a future in which artificial intelligence surpasses human intelligence - and in which it may be hard to tell one from the other.
Source: The Associated Press

At the center of a black hole lies a point called a singularity, where the known laws of physics break down.

In a similar way, according to futurists gathered Saturday for a weekend conference, information technology is hurtling toward a point where machines will become smarter than their makers. If that happens, it will alter what it means to be human in ways almost impossible to conceive, they say.

"The Singularity Summit: AI and the Future of Humanity" brought together hundreds of Silicon Valley techies and scientists to imagine a future of self-programming computers and brain implants that would allow humans to think at speeds nearing today's microprocessors.

Artificial intelligence researchers at the summit warned that now is the time to develop ethical guidelines for ensuring these advances help rather than harm.

"We and our world won't be us anymore," Rodney Brooks, a robotics professor at the Massachusetts Institute of Technology, told the audience. When it comes to computers, he said, "who is us and who is them is going to become a different sort of question."

Eliezer Yudkowsky, co-founder of the Palo Alto-based Singularity Institute, which organized the summit, focuses his research on the development of so-called "friendly artificial intelligence." His greatest fear, he said, is that a brilliant inventor will create a self-improving but amoral artificial intelligence that turns hostile.

T-minus 22 years?
The first use of the term "singularity" to describe this kind of fundamental technological transformation is credited to Vernor Vinge, a California mathematician and science-fiction author.

High-tech entrepreneur Ray Kurzweil raised the profile of the singularity concept in his 2005 book "The Singularity Is Near," in which he argues that the exponential pace of technological progress makes the emergence of smarter-than-human intelligence the future's only logical outcome.

Kurzweil, director of the Singularity Institute, is so confident in his predictions of the singularity that he has even specified the year when artificial intelligence will surpass human smarts: 2029. The singularity would come some years later, around 2045, he says.

Most "singularists" feel they have strong evidence to support their claims, citing the dramatic advances in computing technology that have already occurred over the last 50 years.

In 1965, Intel co-founder Gordon Moore accurately predicted that the number of transistors on a chip would double about every two years, an observation now known as Moore's Law. By comparison, singularists point out, the entire evolution of modern humans from primates has produced only a threefold increase in brain capacity.
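
For scale, a minimal back-of-the-envelope calculation, assuming a clean two-year doubling sustained over the 50-year span cited above (an illustrative assumption, not a figure from the summit), makes the contrast concrete:

\[
2^{50/2} = 2^{25} \approx 3.4 \times 10^{7}
\]

That is roughly 25 doublings, or about a 34-million-fold increase in transistor counts, set against the threefold increase in brain capacity over the far longer course of human evolution.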

With advances in biotechnology and information technology, they say, there's no scientific reason that human thinking couldn't be pushed to speeds up to a million times faster.

Is the ‘nerdocalypse’ near?
Some critics have mocked singularists for their obsession with "techno-salvation" and "techno-holocaust," or what some wags have called the coming "nerdocalypse." Their predictions are grounded as much in science fiction as in science, the detractors claim, and may never come to pass.

But advocates argue it would be irresponsible to ignore the possibility of dire outcomes.

"Technology is heading here. It will predictably get to the point of making artificial intelligence," Yudkowsky said. "The mere fact that you cannot predict exactly when it will happen down to the day is no excuse for closing your eyes and refusing to think about it."