Scientists say they've made a breakthrough in their pursuit of computers that "think" like a living thing's brain — an effort that tests the limits of technology.
Even the world's most powerful supercomputers can't replicate basic aspects of the human mind. The machines can't imagine a wall painted a different color, for instance, or picture a person's face and connect that to an emotion.
If researchers can make computers operate more the way a brain does — reasoning and dealing with abstractions, among other things — they could unlock tremendous insights in fields as diverse as medicine and economics.
A computer with the power of a human brain is not yet near. But this week researchers from IBM Corp. are reporting that they've simulated a cat's cerebral cortex, the thinking part of the brain, using a massive supercomputer. The computer has 147,456 processors (most modern PCs have just one or two processors) and 144 terabytes of main memory — 100,000 times as much as your computer has.
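The memory comparison can be checked with simple arithmetic. A minimal sketch, assuming a typical PC of the era held about 1.5 gigabytes of RAM (a figure not given in the article):

```python
# Back-of-envelope check of the article's memory comparison.
# Assumption: a typical 2009 PC has roughly 1.5 GB of main memory.
TB = 1024**4  # bytes in a terabyte
GB = 1024**3  # bytes in a gigabyte

supercomputer_memory = 144 * TB
typical_pc_memory = 1.5 * GB  # assumed figure for a PC of the era

ratio = supercomputer_memory / typical_pc_memory
print(round(ratio))  # → 98304, on the order of the 100,000x the article cites
```

Under that assumption the ratio comes out near 100,000, consistent with the article's figure.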
The scientists had previously simulated 40 percent of a mouse's brain in 2006, a rat's full brain in 2007, and 1 percent of a human's cerebral cortex this year, using progressively bigger supercomputers.
Watching thoughts form
The latest feat, being presented at a supercomputing conference in Portland, Ore., doesn't mean the computer thinks like a cat, or that it is the progenitor of a race of robo-cats.
The simulation, which runs 100 times slower than an actual cat's brain, is more about watching how thoughts are formed in the brain and how the roughly 1 billion neurons and 10 trillion synapses in a cat's brain work together.
The researchers created a program that told the supercomputer, housed at Lawrence Livermore National Laboratory, to behave the way a brain is believed to behave. The computer was shown images of corporate logos, including IBM's, and scientists watched as different parts of the simulated brain worked together to figure out what the image was.
Dharmendra Modha, manager of cognitive computing for IBM Research and senior author of the paper, called it a "truly unprecedented scale of simulation." Researchers at Stanford University and Lawrence Berkeley National Laboratory were also part of the project.
Modha says the research could lead to computers that rely less on "structured" data, such as the input 2 plus 2 equals 4, and can handle ambiguity better, like identifying a corporate logo even if the image is blurry. Such computers could also incorporate senses like sight, touch and hearing into the decisions they make.
One reason that development would be significant to IBM: The company is selling "smarter planet" services that use digital sensors to monitor things like weather and traffic and feed that data into computers that are asked to do something with the information, like predicting a tsunami or detecting freeway accidents. Other companies could use "cognitive computing" to make better sense of large volumes of information.
Jim Olds, a neuroscientist and director of the Krasnow Institute for Advanced Study at George Mason University, called the new research a "tremendous step." Olds, who was not involved in IBM's work, said neuroscientists have been amassing data about how the brain works much like "stamp collectors," without a way to tie it together.
"We've made tremendous advances in collecting data, but we don't have a collective theory yet for how this complex organ called the brain produces things like Shakespeare's sonnets and Mozart's symphonies," he said. "The holy grail for neuroscientists is to map activity from single nerve cells, which they know about, into how billions of nerve cells act in concert."
Modha says a simulation of a human cortex could come within the next decade if Moore's Law holds. That's the rule of thumb that the number of transistors on a computer chip tends to double every two years.
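That projection can be sketched as a back-of-envelope calculation. A hypothetical illustration, assuming capacity doubles every two years (Moore's Law as stated) and that, as the article notes, current simulations cover about 1 percent of a human cortex:

```python
import math

def years_until_full_cortex(current_fraction, years_per_doubling=2.0):
    """Years of steady doubling needed to scale current_fraction up to 1.0."""
    doublings = math.log2(1.0 / current_fraction)
    return doublings * years_per_doubling

# From 1 percent coverage: log2(100) ≈ 6.64 doublings, or about 13 years,
# in rough agreement with the "next decade" estimate quoted above.
print(round(years_until_full_cortex(0.01), 1))  # → 13.3
```

This is only an order-of-magnitude sketch; it assumes simulation scale tracks transistor counts one-for-one, which the article does not claim.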
Yet Olds cautioned that simulating the human brain is "such a complex problem that we may not be able to get to an answer, even with supercomputing."
"There are no guarantees in this game because the sheer complexity of the problem really dwarfs anything we've tried to do," he said.