
Facebook reveals how much abusive content it removes

Facebook deleted 865.8 million posts during the first quarter of 2018
Facebook headquarters in Menlo Park, California. (Justin Sullivan / Getty Images)

SAN FRANCISCO — Facebook revealed the scope of abuse on the site in a transparency report released Tuesday that details how many and what types of offending posts the social network’s moderators have removed.

While the report doesn’t tell the entire story, it’s the first time the company has shared data on how it handles abuse.

During the first quarter of 2018, Facebook deleted 865.8 million posts, the majority of which were spam, according to the report. Facebook also removed 28.8 million posts that violated its community standards, with content ranging from nudity to graphic violence and terrorist propaganda, the report said.

The social network also removed 583 million fake accounts during the first three months of the year, a decrease from 694 million the previous quarter. While the number of fake accounts being detected and removed has decreased, the problem is still a massive one for Facebook.

Facebook estimated that fake accounts still represented roughly 3 to 4 percent of its monthly active users. To put the scale of the removals another way, last quarter Facebook deleted nearly two fake accounts for every person living in the United States.

Facebook also said on Monday that it had suspended about 200 apps as part of its investigation into data misuse.

Detecting hate speech appears to be one of Facebook’s biggest challenges, according to the report, since there can be linguistic nuances that artificial intelligence cannot yet detect. Facebook removed 2.5 million pieces of hate speech last quarter, many of which had to be checked by a human review team, according to a post from Guy Rosen, Facebook’s vice president of product management.

However, it’s likely that plenty more remained on the site, and that’s a problem Facebook knows it needs to fix.

Facebook CEO Mark Zuckerberg said the company still has “a lot more work to do” when it comes to removing hate speech and that advancements in artificial intelligence are needed so computers can better understand what may be hate speech in every language.

Sometimes, hate speech on Facebook can have an effect on a user’s physical safety. In conflict-ridden Myanmar, anti-Rohingya Muslim hate speech spread through Facebook’s Messenger app last year, putting thousands of people at risk, according to several human rights organizations, which blasted Facebook’s response as inadequate and said they had to repeatedly flag the issue to the company.

Facebook’s AI has succeeded in flagging nearly 100 percent of spam posts, 96 percent of posts containing nudity and 86 percent of those showing graphic violence, but the company said it needs to get better.

Alex Schultz, Facebook’s vice president of analytics, said it’s inevitable that “people will always try to post bad things on Facebook.” However, he said improving artificial intelligence is crucial to the company’s quest to get “bad content off Facebook before it’s even reported.”

“Improving this rate over time is critical because it’s about directly reducing the negative impact bad content has on people who use Facebook,” he said in a blog post.

The transparency report comes as Facebook tries to correct course after a massive data harvesting scandal shed light on what data the company collects on users, what it does with it and who has access to that information.

Facebook said it expects to release updates on the data every quarter so the community can see the progress it is making.

Zuckerberg and his team have been dealing with the fallout from unknowingly allowing Russia to weaponize its platform to spread fake news in the run-up to the 2016 presidential election. The new report did not focus on Facebook’s efforts to thwart the spread of fake news.