
Facebook will steer users who interact with coronavirus misinformation to WHO

The move is just the most recent step in an aggressive and coordinated response by Facebook and other tech companies to promote facts and guidance from reputable sources.
Image: An attendee takes a photograph of a sign during Facebook Inc.'s F8 developers conference in San Jose, Calif., on April 30, 2019. (Stephen Lam / Reuters file)

Facebook will begin to alert users after they've been exposed to misinformation about the coronavirus, the company announced Thursday, the latest in a series of actions to curtail the spread of wrong or misleading claims related to the pandemic.

Users who have liked, commented on or otherwise reacted to coronavirus misinformation that Facebook has flagged and removed as "harmful" will be directed to a website debunking coronavirus myths from the World Health Organization.

The announcement came in a blog post written by Guy Rosen, Facebook's vice president of integrity.

"We want to connect people who may have interacted with harmful misinformation about the virus with the truth from authoritative sources in case they see or hear these claims again off of Facebook," Rosen wrote.

The new alert will not identify the specific post containing harmful misinformation, according to a Facebook spokesperson, who said the company was relying on research that shows that repeated exposure — even in fact checks — can sometimes reinforce misinformed beliefs.

The move is the most recent step in an aggressive and coordinated response by Facebook and other tech companies to promote facts and guidance from reputable sources about the spread of the coronavirus and to combat the glut of false information, which the WHO warned in February had become a "massive infodemic."


But Facebook and other tech platforms are still struggling to limit the spread of coronavirus-related misinformation.

In March, Facebook issued warnings on about 40 million posts that its fact-checking partners had deemed false, Rosen said, adding that users who saw the warning labels rarely clicked through to the original content. Facebook has also removed hundreds of thousands of posts containing misinformation that "could lead to imminent physical harm," he said.

Facebook CEO Mark Zuckerberg also posted about the move.

Since January, Facebook, YouTube and Twitter have made sweeping moves to promote reliable information, pushing pop-ups and features directing users to websites of trusted health organizations, like the WHO. Facebook has also provided the WHO and other health organizations fighting the coronavirus with free ads and continues to ban ads that promote fake coronavirus cures. Last month, Facebook unveiled a coronavirus information center, a home for information from local leaders and public health organizations.

Through these efforts and others, Facebook and Instagram have directed more than 2 billion people to reliable health resources from the WHO and other organizations, Rosen wrote, with more than 350 million users clicking through to the sites.

At the same time, Facebook, Twitter and YouTube have sent thousands of moderators home, making the platforms more reliant on automated content moderation tools and systems. Ads for banned items like medical masks continue to slip through, and coronavirus misinformation continues to spread in Facebook groups, mostly private spaces where misinformation and rumors are shared widely and poorly moderated, some of them built around conspiracy theories and extreme ideologies.

Disinformation researchers noted Facebook's successes but said they were still not enough to meet the challenge.

"Could a Facebook post kill Grandma? It's more likely than ever with the lack of curation of COVID-19 information," said Joan Donovan, the director of Harvard University's Technology and Social Change Research Project. "Facebook has come a long way to change some of their product's design by implementing fact-checking, but we are seeing manipulators evolve much quicker. Without a plan for robust curation and strategic amplification, we will continue to suffer the ill effects of health misinformation."

Facebook's announcement came the same day that Avaaz, an activist organization that campaigns against disinformation online, released a report that found that millions of users continue to see and interact with coronavirus misinformation on Facebook, despite the company's efforts to stop its spread.

Avaaz's study looked at 100 posts in six languages that third-party fact-checkers had deemed to be misinformation, content that garnered 117 million views and more than 1.7 million shares. The report called the small sample "the tip of the iceberg."

In a statement, a Facebook spokesperson said Avaaz's sample was "not representative of the community on Facebook and their findings don't reflect the work we've done."


Avaaz's report also found that Facebook could be slow to act on posts containing misinformation, which could then give the posts time to go viral. Facebook sometimes took days or weeks to issue warning labels, the report said, and it issued them unequally across languages, favoring English-language posts.

Fadi Quran, the anti-disinformation campaign director at Avaaz, said he was encouraged by the steps announced Thursday, adding that the efforts showed how far the company has come in the past five years.

"It was important for Facebook to be honest about the seriousness at this moment, to realize people's lives could be at risk and to send retroactive alerts to users who may have encountered misinformation," Quran said.