Outrage plays well on social media, and it works even better if you name your enemy.
That's the finding of a new study from researchers at the University of Cambridge and New York University who analyzed 2.7 million posts on Facebook and Twitter. It adds to a growing body of research calling into question the impact of the platforms on America's political discourse.
The latest study looked at posts from news media accounts and members of Congress from 2016 to 2020, and it counted how often the posts included terms that referred to a political "out-group," or opponent. The terms could be the name of a famous politician or a generic term such as "liberal" or "conservative."
What the researchers found was that including even one such term raised the odds that people would share the post by 67 percent, and the effect compounded: each additional out-group term in the same post raised the odds again.
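That compounding can be sketched in a few lines of Python. The multiplicative assumption, the function name, and the default value are illustrative only, taken from the 67 percent figure reported above rather than from the study's actual model.

```python
# Illustrative sketch, not the study's model: if each out-group term
# raises the odds of sharing by 67 percent, and the effect compounds
# multiplicatively (an assumption), a post with n such terms has its
# share odds multiplied by (1 + 0.67) ** n.

def share_odds_multiplier(n_terms: int, per_term_increase: float = 0.67) -> float:
    """Combined odds multiplier for a post containing n out-group terms."""
    return (1 + per_term_increase) ** n_terms

for n in range(4):
    print(n, round(share_odds_multiplier(n), 2))
```

Under that assumption, a post with three out-group terms would be more than four times as likely to be shared as one with none.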
The finding is a window into why certain posts are shared so often, and perhaps why online political debates can be so negative.
"We think this cycle, these incentives for virality, are essentially creating a toxic ecosystem," said Steve Rathje, a doctoral student in psychology at the University of Cambridge and the lead author. The study is scheduled for publication this month in the Proceedings of the National Academy of Sciences, and a copy was published online Wednesday.
It's not the first time someone has observed a link between a negative attack and effective attention-grabbing, but Rathje said he and his colleagues were taken aback by the strength of the association.
"This is something that a lot of people's intuition says will happen, but we were still surprised by the size of the effect," he said.
Facebook and Twitter did not respond to requests for comment on the study Friday.
Academic research into the impact of social media on politics has mushroomed over the past five years, since former President Donald Trump used posts on Twitter and ads on Facebook to help him win the White House in an upset in 2016.
Past studies have found that Twitter users exposed to opposing views can become even more polarized politically, and some researchers believe social media feedback loops likely reinforce polarization that already exists in U.S. society.
"People want social media to be a competition of ideas, and they think that we log on to search for new information. But it's really a competition of identity, and a lot of what we’re seeking is to increase our status," said Christopher Bail, a sociology professor at Duke University and author of the new book "Breaking the Social Media Prism: How To Make Our Platforms Less Polarizing."
Bail said the latest research is in line with previous findings about social media algorithms and how they encourage people to "dunk" on an adversary — a word that has become synonymous on Twitter with a too-easy verbal takedown of an ideological opponent that can rack up likes and retweets.
But he also said there's still a lot of uncertainty around how much social media sites are contributing to polarization or other perceived ills.
"It could just be a small group of people who are doing most of the commenting, liking and sharing," he said.
A 2019 study by the Pew Research Center found that a relatively small group of Twitter users — 6 percent — generated 73 percent of tweets mentioning national politics. Another study released in 2019 found that people who were paid to quit Facebook for a month didn't necessarily become less politically polarized.
Facebook, Twitter, YouTube and other tech companies have taken steps to re-examine their handling of politics, even if they have largely avoided fundamental changes.
The Wall Street Journal reported last year that Facebook's own internal research had shown that in some cases its services make tribal behavior worse but that CEO Mark Zuckerberg and other executives shelved the research and blocked efforts to overhaul the company's products.
Facebook employees last year developed a series of proposed features to improve civility, but the company mostly rejected them, The New York Times reported, because of concerns that the changes would hurt growth or disproportionately affect readers of right-wing websites.
Twitter has experimented with tweaks to its platform, such as pre-emptively debunking false information about voting in a feature called "pre-bunks" and prompting people to reconsider their words if they're about to send a rude reply.