
Twitter suspended dozens of accounts. But were they Russian? It's hard to tell.

National security professionals, tech companies and disinformation researchers are on high alert for influence operations interfering in the election.
Employees walk past a lighted Twitter logo as they leave the company's headquarters in San Francisco on Aug. 13, 2019. Glenn Chapman / AFP via Getty Images file

In May, Twitter suspended 44 accounts that disinformation experts said mirrored known Russian election interference tactics.

But determining exactly who was behind the accounts is a challenge, one that researchers warn remains fraught.

The previously unreported inauthentic accounts were uncovered by researchers Darren Linvill and Patrick Warren of Clemson University in South Carolina using a system of qualitative and quantitative data analysis they've developed that can sift through huge numbers of tweets to identify potential foreign influence operations. Twitter confirmed in May that it suspended the accounts "for platform manipulation and spam," but it wouldn't comment on whether they were state-sponsored.
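Linvill and Warren have not published the details of their system, but one quantitative signal that analyses of this kind commonly rely on is coordination detection: flagging groups of accounts that post matching text within a short window of time. The sketch below is a minimal illustration of that idea, not the Clemson pipeline; the tweet records, the normalization step and the 10-minute threshold are all assumptions for the example.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical records of (account, text, timestamp). A real pipeline
# would pull these from the Twitter API; the grouping logic is the same.
tweets = [
    ("acct_a", "Wake up America, the election is rigged", datetime(2020, 5, 1, 12, 0)),
    ("acct_b", "Wake up America, the election is rigged", datetime(2020, 5, 1, 12, 3)),
    ("acct_c", "Totally unrelated post about baseball", datetime(2020, 5, 1, 12, 5)),
]

WINDOW_SECONDS = 600  # assumed threshold: matching text within 10 minutes

def normalize(text):
    """Collapse case and whitespace so trivially edited copies still match."""
    return " ".join(text.lower().split())

# Group the accounts that posted each normalized piece of text.
by_text = defaultdict(list)
for account, text, ts in tweets:
    by_text[normalize(text)].append((account, ts))

# Flag any text posted by multiple distinct accounts inside the window.
for text, posts in by_text.items():
    posts.sort(key=lambda p: p[1])
    accounts = {a for a, _ in posts}
    span = (posts[-1][1] - posts[0][1]).total_seconds()
    if len(accounts) > 1 and span <= WINDOW_SECONDS:
        print(f"possible coordination ({len(accounts)} accounts): {text!r}")
```

A signal like this only surfaces candidates for human review; as the researchers note below, matching text alone is never enough to attribute an account.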

National security professionals, tech companies and disinformation researchers are on high alert for influence operations interfering in the election. And while many of them expect new tactics, there is concern that the strategy of using fake accounts to spread misinformation and politically divisive rhetoric is still being used. The office of the director of national intelligence said in a recent statement that Russian operations were continuing and that Russia was "using a range of measures to primarily denigrate former Vice President Biden."

During two recent news events, the NBA walkout and the protests in Kenosha, Wisconsin, Twitter users flagged accounts retweeting similar phrases. But that alone isn't enough to label an account a "bot" or a "troll."

The information that can be used to trace the origins of accounts is privately held by the tech companies, leaving researchers to try to watch for subtle signatures in the way fake accounts act for hints about their operators. They look for indicators that, when added up, make the accounts stand out from almost any other organic user behavior.
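Researchers stress that no single indicator is decisive; accounts stand out only when several weak signals co-occur. A toy illustration of that "adding up" follows, with indicator names, weights and the review threshold all invented for the example; none of these values comes from the Clemson study.

```python
# Invented indicator weights for illustration only. Real analyses weigh
# far more signals, and these numbers are not from any published study.
INDICATOR_WEIGHTS = {
    "dropped_articles": 1.0,      # grammar errors typical of non-native speakers
    "ai_generated_photo": 2.0,    # artifacts of AI-generated profile images
    "copied_viral_text": 2.0,     # tweets lifted verbatim from other users
    "swing_state_location": 0.5,  # self-reported location in a swing state
}

FLAG_THRESHOLD = 3.0  # assumed cutoff for sending an account to manual review

def suspicion_score(indicators):
    """Sum the weights of whichever indicators an account exhibits."""
    return sum(INDICATOR_WEIGHTS.get(name, 0.0) for name in indicators)

account_indicators = ["dropped_articles", "copied_viral_text", "swing_state_location"]
score = suspicion_score(account_indicators)
print(f"score={score:.1f}", "-> review" if score >= FLAG_THRESHOLD else "-> pass")
```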

The accounts flagged by the Clemson researchers used the same overall strategies the U.S. government identified when it filed charges in 2018 against Kremlin-backed operators of social media accounts it accused of interfering with the 2016 presidential election, including supporting Donald Trump and attacking Democratic candidates. As in 2016, the accounts also supported Sen. Bernie Sanders, I-Vt., and continued to do so after he dropped out of the primaries.

The accounts, which were removed in March and May and were analyzed by experts for NBC News over the past months, adopted the personas of American activists from across the political spectrum on wedge issues, from Trump-supporting Midwesterners and gun rights activists to LGBTQ people and Black and Latino Americans, and expressed support for positions on the political edges. They also retweeted a set of accounts operating out of Ghana that was uncovered by the Clemson researchers and removed in March by Twitter and Facebook for being linked to Russia.

The accounts struggled with English grammar in ways that have been observed in other Russian operations and wouldn't be expected from native speakers, such as errors in word order, dropped articles like "a" and "the," and difficulty with the possessive case. Some used artificial intelligence-generated and stolen profile photos and listed their operators as living in swing states. Several of the accounts reused text and memes previously posted by known Russian troll accounts, and they copied and pasted text from the viral tweets of authentic users, a behavior seen in a set of 50 Russian-linked Instagram accounts that Facebook suspended last October.
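Copy-pasting viral text is one of the easier behaviors to check mechanically. A minimal sketch using Python's standard-library difflib is shown below; the viral posts, the suspect tweet and the 0.9 similarity cutoff are invented for the example, and any real check would compare against a large corpus of high-engagement posts.

```python
from difflib import SequenceMatcher

# Invented example text, standing in for a corpus of viral tweets.
viral_posts = [
    "Can't believe what's happening in this country right now. Stay strong everyone.",
    "Retweet if you think the media isn't telling the whole story.",
]

suspect_tweet = "cant believe whats happening in this country right now. stay strong everyone"

def similarity(a, b):
    """Ratio of matching characters after lowercasing (1.0 = identical)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for post in viral_posts:
    r = similarity(suspect_tweet, post)
    if r > 0.9:  # assumed threshold for "copied with minor edits"
        print(f"likely copy-paste (similarity {r:.2f}): {post!r}")
```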

But without confirmation from the platforms and the government, researchers can't definitively conclude whether Russia was behind the accounts or whether it was a nation-state at all.

Young Mie Kim, a journalism professor at the University of Wisconsin-Madison who has studied misinformation networks on social media, said the accounts "look like coordinated behavior and share some similar traits with Russian tactics," but she cautioned about making firm attributions.

"My definition of a 'Russian account' are only ones confirmed by government records," an approach that necessarily underestimates the total number of accounts, she said.

To make a final determination of who may be behind an account, researchers rely on social media platforms and governments for disclosure. The platforms and governments are still navigating a new, evolving collaborative counter-disinformation effort with researchers, journalists, the public and one another, and they have their own agendas and concerns. They also hold the upper hand in controlling a potentially damaging narrative about the extent of malicious foreign activity.

Social media companies will enforce takedown policies on suspicious accounts long before they're identified as being part of information operations, if they ever are.

"Attribution is always the hardest part of an investigation, and it's important to be cautious: You don't want to attribute an operation to the wrong actor," said Ben Nimmo, a disinformation researcher at the social media analysis firm Graphika.

The Clemson researchers who found the accounts said whoever was operating them had taken the time to slowly build followings.

"They are not out there looking for a fight. These accounts were accruing audiences that agreed with the persona of the account," Linvill said. "They weren't trying to push you away. They were trying to pull you with them."