
Cooperating Mini-Brains Show How Intelligence Evolved

Working together can hasten brain evolution, according to a new computer simulation.
Source: LiveScience


When programmed to navigate challenging cooperative tasks, the artificial neural networks set up by scientists to serve as mini-brains "learned" to work together, evolving the virtual equivalent of boosted brainpower over generations. The findings support a long-held theory that social interactions may have triggered brain evolution in human ancestors.

"It is the transition to a cooperative group that can lead to maximum selection for intelligence," said study researcher Luke McNally, a doctoral candidate at Trinity College Dublin. Greater intelligence, in turn, leads to more sophisticated cooperation, McNally told LiveScience. [ 10 Fun Brain Facts ]

It also leads to more sophisticated means of cheating, he added.

Virtual neurons

McNally and his colleagues used artificial neural networks as virtual guinea pigs to test the social theory of brain evolution. These networks are the numerical equivalent of very simple brains. They're built from interconnected nodes, with each node representing a neuron.

"In the same way that neurons excite each other via signals [in the brain], these nodes pass numbers to each other, which then decides the activity of the next node," McNally said.

The neural networks are programmed to evolve, as well. They reproduce, and random mutations can introduce extra nodes into their networks. Just as in real-world evolution, if those nodes are beneficial to the network, it will be more likely to succeed and reproduce again, passing on the extra brain boost.
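A generic version of that evolutionary loop might look like the sketch below. The mutation rate, the population handling, and the bare-bones network representation are illustrative assumptions, not the study's actual code.

```python
import random

def mutate(network):
    """Copy a network and occasionally add a node, the 'extra brain boost'."""
    child = dict(network)
    if random.random() < 0.05:  # assumed mutation rate, for illustration only
        child["nodes"] += 1
    return child

def next_generation(population, fitness):
    """Networks that do better in the games are more likely to reproduce."""
    scores = [fitness(net) for net in population]
    parents = random.choices(population, weights=scores, k=len(population))
    return [mutate(parent) for parent in parents]
```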

The researchers assigned two different games for these networks to play, each an analogy for a different kind of social interaction. One, called the Prisoner's Dilemma, puts its participants in a scenario where cooperation is best for both parties but each may still be motivated to freeload. In the scenario, two suspects have been arrested for a crime. The police offer both the same deal: Snitch on your partner while he keeps quiet and you'll walk free; if you both snitch, you'll both serve a medium-length sentence. If neither of you snitches, we'll easily convict you both of a lesser crime, and you'll each spend just a little time in jail. But if you keep quiet and the other prisoner snitches, you're taking the fall, and you'll be in prison for a long time.

It's best for both parties to keep quiet, but each may be tempted to take the risk of snitching, hoping the other is more noble.

In a second scenario, the snowdrift game, two partners have to work together to dig out of a snowdrift. The best choice from the point of view of one partner is to let the other do all the digging. But if both partners choose this route, neither will get out of the snowdrift.
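In game-theory terms, each game boils down to a small payoff table. The sketch below uses standard textbook payoff values to show the structure of the two games; the actual numbers used in the study are not given in the article.

```python
# Payoffs from the row player's point of view: (my move, partner's move) -> my score.
# The specific values are standard textbook choices, not the study's parameters.
prisoners_dilemma = {
    ("quiet",  "quiet"):  3,   # both keep quiet: short sentences for a lesser crime
    ("quiet",  "snitch"): 0,   # I take the fall: long sentence
    ("snitch", "quiet"):  5,   # I walk free while my partner takes the fall
    ("snitch", "snitch"): 1,   # we both snitch: medium sentences
}

snowdrift = {
    ("dig",   "dig"):   3,     # we share the digging
    ("dig",   "shirk"): 1,     # I do all the work, but at least we get out
    ("shirk", "dig"):   5,     # my partner digs us out while I rest
    ("shirk", "shirk"): 0,     # nobody digs, nobody gets out
}
```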

Artificial neural networks don't understand prisons or snowdrifts, of course, but they can be made to mathematically "play" these games, with winners getting a numerical payoff for avoiding a prison sentence or digging out of the snow. McNally and his colleagues set up 10 experiments in which 50,000 generations of neural networks got to work out these games. Intelligence was measured by the number of nodes added in each network as the players evolved over time.
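Putting the pieces together, one such experiment might be organized roughly like the sketch below, which builds on the mutate and next_generation functions above. The population size, the one-node starting networks, and the play_game scoring stub are all assumptions made for illustration; only the figure of 10 runs of 50,000 generations comes from the article.

```python
import random

def play_game(network, payoffs):
    """Placeholder scoring stub so the loop runs; in the study, a network's
    real fitness would come from the payoffs it earns playing the game."""
    return 1.0 + random.random()

def run_experiment(payoffs, generations=50_000, pop_size=100):
    """One experiment: evolve a population and track mean node count,
    the quantity the study uses as a stand-in for intelligence."""
    population = [{"nodes": 1} for _ in range(pop_size)]
    mean_nodes = []
    for _ in range(generations):
        population = next_generation(population, lambda net: play_game(net, payoffs))
        mean_nodes.append(sum(net["nodes"] for net in population) / pop_size)
    return mean_nodes
```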

Artificial brain boom

The networks proved quite good at playing both the Prisoner's Dilemma and the snowdrift game, McNally said. They evolved strategies just like those seen when humans play these games with other humans.

But the game-playing strategies weren't constant over time. As random "mutations" in the program yielded networks with more nodes (an analogy for more intelligence), cooperation began to pick up. And as soon as cooperation began, the evolutionary pressure for big brains skyrocketed.

"When the society begins to evolve from a scenario of low cooperation, initially, toward a more cooperative scenario, that's when we got the maximum solution for intelligence," McNally said. In other words, networks with more nodes were more successful at the games and thus "lived on" to reproduce increasingly large virtual brains.

This feedback loop continued, McNally said, with larger brains begetting a "Machiavellian arms race": some neural networks would figure out how to freeload, or cheat, in the two games, which in turn prompted other networks to "learn" how to detect cheaters and outwit them. A clever network might, for example, start out its interactions with another network cooperatively, only to turn on its partner and begin cheating later.
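Strategies of that kind are easy to express as simple decision rules. The two functions below are hypothetical illustrations of the behaviors described above, not strategies reported in the study.

```python
def fair_weather_cooperator(round_number, partner_history):
    """Hypothetical 'clever' strategy: cooperate early to build trust,
    then start cheating."""
    return "cooperate" if round_number < 3 else "defect"

def wary_reciprocator(round_number, partner_history):
    """Hypothetical counter-strategy: cooperate only while the partner does."""
    if partner_history and partner_history[-1] == "defect":
        return "defect"
    return "cooperate"
```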

The neural networks are nowhere near as complex as the human brain, McNally said, but virtual experiments provide a way to watch basic evolution in action without waiting millions of years. He and his colleagues are now collecting data on various primate species to investigate the link between brain size (the real-world counterpart of the node counts used as a proxy for intelligence in this study) and actual intelligence.

"What this indicates is that in species ancestral to humans, it could have been the transition to more-cooperative societies that drove the evolution of our brains," McNally said. "It's confirming that this old idea does work and holds water."

You can follow LiveScience senior writer Stephanie Pappas on Twitter.