Senate intel committee grapples with social media's threat to democracy
Tech companies have been under intense scrutiny to do more to prevent the spread of misinformation and propaganda.
Social media experts are greeted by Chairman Richard Burr (R-NC), right, and ranking member Sen. Mark Warner (D-VA), during a Senate Intelligence Committee hearing on Capitol Hill on Aug. 1, 2018, in Washington. Mark Wilson / Getty Images
Social media and technology experts on Wednesday told the Senate intelligence committee that Russia and other foreign actors show few signs of slowing their efforts to spread misinformation and propaganda — and that tech companies aren’t doing enough to neutralize such efforts.
Tech companies — most notably Facebook, Google and Twitter — have been under intense scrutiny to do more to prevent the spread of misinformation and propaganda, after Russia-linked groups were found to have used those platforms to push divisive political messages.
The challenge, said Sen. Mark Warner, D-Va., vice chairman of the committee, is to figure out how to crack down on misinformation campaigns without infringing on civil liberties and free speech.
"Foreign operatives … almost by design slip between our free speech guarantees and our legal authorities," Warner said.
It was the committee’s third official hearing focusing on social media and foreign influence, though there have been more than a dozen hearings touching on the topic, according to a spokesperson for Warner.
The hearing came a day after Facebook disclosed that it had discovered and removed 32 pages tied to a new covert campaign to spread divisive political content. The company declined to name who was responsible, but experts and lawmakers say the campaign’s tactics and tools mirror those deployed by the Kremlin.
Central to the hearing was the idea that Russia exploited tech companies' hesitance to regulate what is posted on their platforms. Under Section 230 of the Communications Decency Act of 1996, tech companies are not legally liable for what users post: the law grants online platforms "safe harbor" from liability because it treats them as pipes for distributing information, not as publishers.
The idea that tech companies should not be held liable has been under pressure after revelations that Russia-connected groups were able to spread disinformation to millions of Americans.
Sen. Ron Wyden, D-Ore., who helped create that standard, had stern words for tech companies that rely on that legal protection.
“As the author of Section 230, the days when these pipes are considered neutral are over,” Wyden said, “because the whole point of 230 was to have a shield and a sword, and the sword hasn’t been used and these pipes are not neutral.”
Experts who testified on Wednesday took the social media platforms to task, highlighting the failure of the public and private sector to cooperate in combating the problem, and stressing the urgency of devising workable solutions.
“This is one of the defining threats of our generation,” said Renee DiResta, director of research at New Knowledge, a company that identifies social media disinformation. “Platforms need to be held accountable for private ownership of our public squares.”
Attacks by Russia and other foreign powers are likely to become more devious and harder to detect, DiResta said, employing sockpuppets (false online identities designed to deceive), witting and unwitting participants, smaller platforms, artificial intelligence, and fake audio and video.
At Wednesday's hearing, the committee released new examples of the most effective and widely shared content created and distributed by the Kremlin-connected Internet Research Agency "troll farm," which operates across multiple social media platforms.
The most-shared meme on Facebook featured a picture of the cartoon character Yosemite Sam carrying two six-shooters on a background of the Confederate flag.
"I was banned from television for being too violent," read the caption, which continued: "Like & share, if you grew up watching me on television, have a gun, and haven't shot or killed anyone!" The meme was stamped with the logo of the deleted Russian Facebook page "South United." The image, circulated in early March 2016, drew 986,203 engagements, according to the materials released by the intelligence committee.
Another image featured what was described as a homeless veteran and said, "Like & share if you think our veterans must get benefits before refugees." That meme, posted Sept. 8, 2016, drew 723,750 engagements for the now-deleted group "Being Patriotic."
The hearing sought input on technical and regulatory solutions that could start to tackle the problem.
Laura Rosenberger, director of the Alliance for Securing Democracy at the German Marshall Fund, an international policy think tank, suggested that platforms could authenticate their users while using technical or third-party tools to protect user privacy and anonymity.
She also said that while the platforms may not have seen or expected the attacks in 2016, they no longer had any excuse for not taking action.
“What was once a failure to imagine is a failure to act,” said Rosenberger.
Philip Howard, director of the Oxford Internet Institute, said platforms need to be more open about sharing data on activity on their services while still protecting user privacy.
“It’s the social media firms that have the best access to the information” about what happens on them, Howard said.
In September, top executives from Facebook, Twitter, and Google are scheduled to testify before the Senate Intelligence Committee.