Facebook said on Tuesday it removed 7 million posts in the second quarter for sharing false information about the novel coronavirus, including content that promoted fake preventative measures and exaggerated cures.
Facebook released the data as part of its sixth Community Standards Enforcement Report, which it introduced in 2018 along with more stringent content rules in response to a backlash over its lax approach to policing material on its platforms.
The company said it would invite external experts to independently audit the metrics used in the report, beginning in 2021.
The world’s biggest social media company removed about 22.5 million posts containing hate speech on its flagship app in the second quarter, up from 9.6 million in the first quarter. It also deleted 8.7 million posts connected to extremist organizations, compared with 6.3 million in the prior period.
Facebook said it relied more heavily on automation technology to review content in April, May and June, as it had fewer reviewers at its offices due to the COVID-19 pandemic.
That resulted in the company taking action on fewer pieces of content related to suicide and self-injury, child nudity and sexual exploitation on its platforms, Facebook said in a blog post.
The company said it was expanding its hate speech policy to include “content depicting blackface, or stereotypes about Jewish people controlling the world.”
Some U.S. politicians and public figures have caused controversies by donning blackface, a practice that dates back to 19th century minstrel shows that caricatured slaves. It has long been used to demean African-Americans.