'Relatively few' Twitter bots were needed to spread misinformation and overwhelm fact checkers, study finds

“Bots amplify the reach of low-credibility content, to the point that it is statistically indistinguishable from that of fact-checking articles,” researchers wrote.
A small number of automated Twitter accounts played an outsized role in spreading content from low-quality publishers and overwhelming the efforts of fact checkers, according to a new study that highlights how bad actors were able to manipulate the sizable platform.

The study, conducted by Indiana University researchers and published Tuesday in the academic journal Nature Communications, found that “relatively few accounts are responsible for a large share of the traffic that carries misinformation”: just 6 percent of Twitter accounts identified as bots accounted for 31 percent of the spread of “low-credibility” content.

“Bots amplify the reach of low-credibility content, to the point that it is statistically indistinguishable from that of fact-checking articles,” researchers wrote.

The study comes as Twitter has moved to crack down on fake and automated accounts following revelations that a Russia-based foreign influence campaign spread misinformation and divisive political rhetoric during the 2016 U.S. election. Twitter has removed tens of millions of accounts in 2018.

A spokesperson for Twitter pointed NBC News to the company’s blog post from June in which it detailed its efforts to fight bots.

The study analyzed 14 million tweets that linked to more than 400,000 articles posted from May 2016 through the end of March 2017. Of those articles, 389,569 came from “low-credibility sources” that fact-checking organizations had repeatedly flagged for containing misinformation, and 15,053 originated from “fact-checking sources.”

Of that sample, more than 13.6 million tweets linked to “low-credibility sources” while around 1.1 million tweets linked to known fact-checking sources, leading researchers to attribute greater virality to “fake news.”

The study found that “social bots” used two methods to maximize exposure and manipulate users into trusting the linked articles.

“First, bots are particularly active in amplifying content in the very early spreading moments, before an article goes ‘viral,’” researchers wrote. “Second, bots target influential users through replies and mentions.”

The researchers noted how one social bot mentioned President Donald Trump’s official Twitter account, @realDonaldTrump, in 19 tweets that each linked to a single article falsely claiming that undocumented immigrants had cast millions of votes. The researchers suggested that bots do this in the hope that “these targets will then reshare the content to their followers, thus boosting its credibility.”

Users struggled to differentiate bots from humans, as people “have retweeted bots who post low-credibility content almost as much as they retweet other humans,” according to the researchers.

The researchers noted that social media platforms have moved to address the spread of misinformation by bots, but said “their effectiveness is hard to evaluate.”

Josh Russell, an independent researcher who studies misinformation campaigns, said the study highlighted the need for Twitter to act quickly in taking down fake accounts.

“Twitter has been great at taking down user-reported bots,” Russell said via a Twitter direct message. “They just need to improve the detection system.”