
Tech companies struggle with the human side of fake news


Fake news is a human problem. And Silicon Valley is not good at solving human problems.

With their platforms built to supercharge the sharing of information, tech companies, journalists, and academics are scrambling for a way to stem the spread of misinformation on the internet.

A study published last week in the academic journal Science provided a stark reminder of the challenge posed by misinformation — and why modern technology is making it worse. The study found that it took true information on Twitter six times as long as misinformation to reach 1,500 people.

For all the concern about Russian trolls and Twitter bots, the study found that real people remain the biggest conduit for fake news.

"You can see the viral chain of this information," said Jeremy Littau, a professor of journalism and communication at Lehigh University. "I shared from this person I trust who shared from this person they trust...It’s a recipe ripe for exploitation for people who make fake news."

So far, nothing has been able to stop this people problem, and Silicon Valley is running out of ideas. In an unprecedented move, Twitter CEO Jack Dorsey recently announced that the company will start soliciting outside proposals to increase the “collective health, openness, and civility of public conversation, and to hold ourselves publicly accountable towards progress.”

No quick fixes

Facebook, which has been shown to be an epicenter for the spread of misinformation, has rolled out a variety of tools and features. None of them appear to have been successful. Facebook ditched its fake news warning flag last December after testing it for one year.

Turns out, those kinds of flags can have the opposite of the intended effect.

"Academic research on correcting misinformation has shown that putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs – the opposite effect to what we intended," Tessa Lyons, a Facebook product manager, said in a blog post.

Alex Hardiman, Facebook’s head of news product, said during a panel discussion at South by Southwest, an annual tech conference, that the company is considering drastic measures.

"As we are making the shift to quality, everything is on the table," she said.

Facebook’s previous approach did not distinguish between news sources and “that was problematic,” she said. “Flattening the news meant you couldn't always tell the difference between something that was trusted and credible, versus something fraudulent.”

A few days later, Adam Mosseri, the head of Facebook’s News Feed product, admitted that people at the company are seriously concerned about reports that Facebook played a role in violence against Rohingya Muslims in Myanmar.

“We lose some sleep over this,” he said on Slate’s “If Then” podcast.

Twitter is also working on its problems. The company said that it’s figuring out how to boost “conversational health” by showing more tweets from reliable news partners.

Those efforts have been mostly cheered, but there is also concern that handing too much authority to tech companies could backfire. Most companies have long taken a hands-off approach to the information posted to their platforms. Now, there’s a question of just how far they’ll go.

"We are all super concerned about these middle spaces,” said Jen Golbeck, a professor at the University of Maryland's College of Information Studies. “There's a lot of misinterpretation, taking stuff out of context. We want to be really careful about censoring that. We're not interested in creating a truth-only platform because who is deciding the truth?"

Contradiction bias

Tech companies are built to spread information quickly and cheaply, and they’ve gotten very good at that.

But that speed and ease pose a serious problem when it comes to countering misinformation. Golbeck said that by the time people read something wrong, the damage is already done.

"Psychological studies have shown that calling out conspiracy theories with the truth makes people believe false stuff more," she said.

Facebook, Twitter and YouTube have taken some action to stop harassment and abuse at the source by banning accounts. But when it comes to misinformation, they have been hesitant or slow to take similar action.

Even relatively swift responses have come too late. YouTube pulled several videos that claimed Parkland shooting survivor David Hogg was a “crisis actor,” but by the time the platform took action, the videos had already reached the top of YouTube’s “Trending” section, logged millions of views and become a talking point on the far right.

The Feds

If big tech companies don’t view the problem of misinformation as important enough to fix on their own, there’s a chance that the prospect of government regulation, something these platforms have had very little of, will get them to act.

Amid mounting frustration among politicians, social media companies may be racing the regulators, particularly in Europe, where several countries are pushing ahead with aggressive new rules.

A few months ago, Germany began requiring some social media sites to remove abusive posts within 24 hours of being flagged or face fines of as much as 50 million euros. France is weighing a rapid-response law that would let lawmakers act quickly to stop a fake news story when it goes viral.

At South by Southwest, London Mayor Sadiq Khan used his keynote to suggest that if big tech companies don't act fast enough, they should expect more countries to follow Germany and France's lead.

Without big tech stepping up to find a solution, it's possible the world will see "regulation that is quite draconian from politicians and policymakers, or you'll have consumers walking away from these platforms," Khan told NBC News. "I don't want either. That's why it's important that giant tech companies act responsibly and work with us to try and find solutions."

New hopes

Some internet entrepreneurs are taking a journalistic approach, hoping to lend their expertise by creating organizations in which communities of news gatekeepers act as sheriffs on the Wild West of the internet.

Jimmy Wales, the co-founder of Wikipedia, is piloting WikiTribune, a reader-supported news source that allows anyone to fact-check or flag something in an article.

"Ads are cheap, competition for clicks is fierce and low quality news sources are everywhere," Wales said in a campaign video. "Social media, where most people get their news these days, is literally designed to show us what we want to see, to confirm our biases and to keep us clicking at all costs. It fundamentally breaks the news, and the truth is, on the internet, no one is guarding the gate."

Steve Brill, a media entrepreneur who founded American Lawyer and Court TV, and Gordon Crovitz, a media executive, announced their NewsGuard venture earlier this month.

“Our goal is to help solve this problem now by using human beings—trained, experienced journalists—who will operate under a transparent, accountable process to apply basic common sense to a growing scourge that clearly cannot be solved by algorithms,” Brill said in a statement.

While every effort to stop humans from spreading fake news may have the best of intentions, the most worthwhile effort, experts suggest, may be teaching a new generation news literacy.

"The issue really becomes one... of social literacy and teaching people how to evaluate sources," Golbeck said. "If you look at elementary and high school, it's something we're not doing a great job of yet."

Littau said news literacy is one of the "greatest generational challenges" people need to address.

He's already started teaching his first-grade son how to evaluate sources of information for articles he reads or videos he watches on YouTube.

"One of the things I try to challenge people to do is to think about their motivations for sharing," he said. "Be very skeptical with something that agrees with what you already think. You are more likely be duped by stuff you agree with than stuff you don't."

News literacy is going to be a crucial skill for the next generation, and without it, Littau foresees an even greater mess than the current one.

"I don't want to sound too dramatic," he said. "But I think our democracy is in the line."