By Ben Collins

In a private “strategy chat” with more than 40 far-right trolls, one user who tried to create a new Twitter account to spread disinformation ahead of Tuesday’s midterm elections described how he had hit an immediate roadblock: Twitter banned him for deliberately giving out the wrong election date.

“Were they really banning people for saying [vote on] November 7? Lol, whoops,” the user, whose name was a racist joke about Native Americans, wrote. “Maybe that’s what got me shadowbanned.”

The remark, seen by NBC News in a closed chat room used for planning and executing misinformation efforts, suggested that the changes that Twitter has undertaken in the past two years to avoid a repeat of the 2016 U.S. election may be working. Two years ago, the company did little to police misinformation and allowed a Russian influence campaign and politically motivated trolls to thrive.

A screenshot from a private chat in which a social media troll described getting blocked by Twitter from spreading misinformation

But the trolls are also learning from their mistakes and developing new strategies to sidestep Twitter’s rules — sometimes with new technology available on other apps — highlighting the arms race between these groups and social media companies that are developing systems to stop them.

While much of its focus has been on foreign operations, Twitter has ramped up preventive measures against domestic troll networks that organize in private chats to push coordinated disinformation on its platform. On Friday, Twitter revealed it took down 10,000 accounts that discouraged voting, mostly accounts posing as Democrats.

NBC News saw some of those accounts in action when an NBC reporter was mistakenly invited to private chats on Twitter and in the gaming chat room service Discord, where some of those talking points and strategies for spreading disinformation were workshopped in the last month.

Many of those strategies, including encouraging Democrats to vote one day after election day, were stopped by an algorithm before they reached users on Twitter.

A spokesperson for Twitter pointed NBC News to a series of company blog posts from last month describing updates about rules surrounding fake accounts on Twitter, including plans to ban the “use of stock or stolen avatar photos” and the “use of intentionally misleading profile information.”

“As platform manipulation tactics continue to evolve, we are updating and expanding our rules to better reflect how we identify fake accounts, and what types of inauthentic activity violate our guidelines,” Del Harvey, vice president for trust and safety at Twitter, and Yoel Roth, head of site integrity, wrote in a blog post. “We now may remove fake accounts engaged in a variety of emergent, malicious behaviors.”

Nina Jankowicz, a global fellow who specializes in disinformation at the Wilson Center's Kennan Institute, a research center that studies Russia, called Twitter’s automated troll enforcement “the type of proactive behavior we need to see more of” from social media companies.

“Particularly, it’s interesting to see they’re looking to see efforts around voter suppression — something we’ve seen a lot in the past on social media platforms before election day,” Jankowicz said. “It’s a good use of [artificial intelligence], and I wish Facebook would do more about it.”

Both Twitter and Facebook have devoted “war rooms” to fighting back against election-related disinformation and false conspiracy theories in the run-up to the 2018 midterms. Facebook has publicly struggled with the problem in the last week, as multiple reports have surfaced of viral troll accounts spreading conspiracy theories and false or racist memes.

Many of those memes are created or weaponized in small groups, then pushed onto Facebook pages or public Twitter accounts to achieve maximum virality.

“That’s where disinformation is thriving now, where there’s no content moderation and no ability to search or see what’s trending,” Jankowicz said. “It’s a real problem.”

In some cases, Twitter’s algorithm could not keep up with persistent trolls working together in private chats. NBC News witnessed trolls developing new strategies on the fly to circumvent the bans. Several were successful in creating unique identities that appeared to be middle-aged women who posted anti-Trump rhetoric as part of a long-term effort to build up followings that could later be used to seed disinformation to hundreds or thousands of followers.

One troll who stole a woman’s identity came up with a plan to skirt reverse image search programs that would show users the real identity of the woman in the account's stolen profile picture.

“If you want a Twitter pic that is a completely unique photo and not an actual person, use the Snapchat filter where you can layer another face,” said one user. “It will be a completely unique face.”

A screenshot from a private chat room in which social media trolls discuss how to get around Twitter's security.

Other users passed around tutorials, mostly screenshots from the fringe internet message board 4chan, showing techniques that allowed them to evade bans on Twitter and Facebook. Those strategies were used to target Sen. Cory Booker, D-N.J., whom alt-right trolls referred to as “Spartacus,” with a disinformation campaign last month.

Some of the trolls' coordinated campaigns remain on Twitter under the hashtags "#nomenmidterm" and "#letwomendecide," where users posing as Democrats implore liberal men not to vote.