
Russia's Social Media Propaganda Was Hiding in Plain Sight

The hearings on Russian disinformation are a bitter coda to a symphony of discontent.
Image: Colin Stretch of Facebook, Sean Edgett of Twitter and Richard Salgado of Google are sworn in prior to testifying before the Senate Intelligence Committee in Washington. The writing on the wall. Joshua Roberts / Reuters

A common misconception about Donald Trump’s rise to presidential power — as well as the role Russia-backed social media allegedly played in getting him there — is that no one saw it coming. In reality, scholars, activists, and regular social media users had warned that platforms seemed to be becoming weaponized for political purposes in a way that felt new and dangerous. The hearings on Russian disinformation over the past two days are a bitter coda to a symphony of discontent to which tech gurus responded slowly.

They do not have that option anymore. But the bigger question is why they ever thought they did.

Since 2014, women have begged Twitter to stop the mass harassment typified by misogynist campaigns like Gamergate. Several figures accused of participating in the harassment, like former Breitbart columnist Milo Yiannopoulos, were tied to the Trump campaign. Also in 2014, black women on Twitter noticed a spate of accounts posing as hostile black users and outed the fakers under the hashtag #YourSlipIsShowing. Three years later, researchers confirmed that such attempts were part of a Russian propaganda operation intended to exacerbate racial tension.

On Monday, Clint Watts, a former FBI special agent and social media researcher who has frequently testified before Congress about Russian interference allegations, confirmed that 2014 was indeed “a dry run” in which Russia mapped the social media landscape and saw how it could be manipulated. To many female and non-white social media users, Watts’ comments likely felt like confirmation rather than revelation.

This is not to say that most harassers and trolls were Russian — online abuse is very much a domestic crisis as well. But had Twitter and other networks acted more quickly to curb abuse and impersonation in general, any outside manipulation that did occur might have been less effective. (In October, Twitter CEO Jack Dorsey pledged to "take a more aggressive stance in our rules and how we enforce them.")


Victims of online harassment weren’t alone in having their warnings ignored, however. As Silicon Valley and the State Department celebrated the role of social media in the Arab Spring, scholars like Harvard’s Rebecca MacKinnon and Stanford’s Evgeny Morozov observed that open platforms were also useful to dictators. Indeed, this “networked authoritarianism,” as MacKinnon called it, can help propaganda spread more effectively.

Unlike other authoritarian states, Russia appears to have realized early on that a partially open Internet was more advantageous than a completely censored one. Given the Kremlin’s surveillance culture and imperialistic ambitions, it should have surprised no one that Moscow might extend those technological aims toward the West.

In 2015, scholars of Russia noticed an infiltration of what appeared to be Kremlin-backed propaganda on US media platforms, the baffling promotion of then long-shot candidate Donald Trump in Russian state media, and the rise of crude Russian propaganda efforts like “Heart of Texas,” a Texas secessionist Facebook group. The group was documented by Russian-speaking journalist Casey Michel beginning in early 2016, but Facebook did not remove it until August 2017. Today, “Heart of Texas” exists only as evidence presented at congressional hearings.

It is erroneous to say that Russian propaganda sealed Trump’s victory: a confluence of factors made it possible, with Russian propaganda just one part. Furthermore, the examples of Russian meddling that we have uncovered largely exploited preexisting social rifts and lent support to notoriously controversial US citizens, like conspiracy theorist Alex Jones. This makes sense when you look back at Clint Watts’ testimony. Having mapped the social media landscape years ago, foreign agents would have known who and what could help them reshape it.

Some Russian efforts, like “Heart of Texas,” were sloppy; others, like inviting Jones to appear on Kremlin-backed media outlet RT, were done in public. (Jones has denied any improper Russian contact.) The most troubling efforts were the believable impersonations of good-faith Americans and the swarms of anonymous bots that attempted to influence the public — spurring protests or causing topics to trend, for example — while remaining largely undetected.

But again, while these tactics were harder to observe, Russia’s overall agenda should not have surprised Congress or social media companies, given the long-running concerns of citizens and scholars knowledgeable about both social media and Russia’s historical geopolitical ambitions.

Image: Examples of Facebook pages are seen as executives appear before the House Intelligence Committee to answer questions related to Russian use of social media to influence U.S. elections, on Capitol Hill in Washington. Exhibit A. Aaron P. Bernstein / Reuters

Unfortunately, the past few days of congressional testimony suggest that tech company leaders remain obstinate and opaque. After confirming months ago that America had been targeted in a massive Russian propaganda operation — and gradually acknowledging that the number of people exposed was much higher than initially stated — Facebook and Twitter still refuse to make public the Russian propaganda accounts, although on Tuesday the House released dozens it had identified during its investigation. This matters because, without this information, citizens cannot take stock of how they might have been influenced.

At the end of Tuesday’s hearings, Democratic Senator Jack Reed of Rhode Island told Facebook’s General Counsel Colin Stretch that, under the First Amendment, Americans “who have been deliberately misled by a foreign government” must be notified. Stretch replied that this was such a tremendous undertaking that Facebook might not do it.

This answer is not good enough, especially given the resources companies like Facebook command and their responsibility to prevent their platforms from being used as propaganda tools. Facebook itself has acknowledged the need to stamp out "fake news."

For years, social media users called on these companies — which now wield tremendous collective power — to protect people from harm. They did this while supplying ample evidence that the harm existed.

Now that the harm has led to a congressional inquiry involving alleged foreign actors, the old problems of online abuse and impersonation are finally being taken more seriously. But it should never have gotten this far, and an honest and transparent account of what happened is the only way to make sure it stops. Scholars and users have long supplied helpful warnings; now it is Silicon Valley’s turn.

Sarah Kendzior is a journalist who lives in St. Louis, Missouri and covers politics, the economy and media.