
Facebook ignored racial bias research, employees say

Researchers said the company ignored their work and then stopped them from pursuing topics related to bias altogether.
Mark Zuckerberg; Sheryl Sandberg. Getty Images

In mid-2019, researchers at Facebook began studying a new set of rules proposed for the automated system that Instagram uses to remove accounts for bullying and other infractions.

What they found was alarming. Users on the Facebook-owned Instagram in the United States whose activity on the app suggested they were Black were about 50 percent more likely under the new rules to have their accounts automatically disabled by the moderation system than those whose activity indicated they were white, according to two current employees and one former employee, who all spoke on the condition of anonymity because they weren’t authorized to talk to the media.

The findings were echoed by interviews with Facebook and Instagram users who said they felt that the platforms’ moderation practices were discriminatory, the employees said.

The researchers took their findings to their superiors, expecting that the findings would prompt managers to quash the changes. Instead, they were told not to share their findings with co-workers or conduct any further research into racial bias in Instagram’s automated account removal system. Instagram ended up implementing a slightly different version of the new rules but declined to let the researchers test the new version.

It was an episode that frustrated employees who wanted to reduce racial bias on the platform but one that they said did not surprise them. Facebook management has repeatedly ignored and suppressed internal research showing racial bias in the way that the platform removes content, according to eight current and former employees, all of whom requested anonymity to discuss internal Facebook business.

The lack of action on this issue from management has contributed to a growing sense among some Facebook employees that a small inner circle of senior executives — including Chief Executive Mark Zuckerberg, Chief Operating Officer Sheryl Sandberg, Nick Clegg, vice president of global affairs and communications, and Joel Kaplan, vice president of global public policy — is making decisions that run counter to the recommendations of subject matter experts and researchers below them, particularly around hate speech, violence and racial bias, the employees said.

Facebook did not deny that some researchers were told to stop exploring racial bias but said that it was because the methodology used was flawed.

Alex Schultz, Facebook's vice president of growth and analytics, said research and analyses on race are important to Facebook but that race is a “very charged topic,” so the work needs to be done in a rigorous, standardized way across the company.

“There will be people who are upset with the speed we are taking action,” he said, adding that “we’ve massively increased our investment” in understanding hate speech and algorithmic bias.

“We are actively investigating how to measure and analyze internet products along race and ethnic lines responsibly and in partnership with other companies,” Facebook spokeswoman Carolyn Glanville added, noting that the company established a team of experts last year, called Responsible AI, focused on “understanding fairness and inclusion concerns” related to the deployment of artificial intelligence in Facebook products.

Reporting and moderation

One key source of tension for Facebook comes from the way its automated system moderates hate speech.

Facebook has policies prohibiting hate speech that attacks people based on “protected characteristics” including race, ethnicity, religion, gender or sexual orientation. It relies on user reports and automated content moderation tools to identify and remove this speech.

In an effort to be neutral, the company’s hate speech policies treat attacks on white people or men in exactly the same way as they treat attacks on Black people or women, an approach that employees said does not take into account the historical context of racism and oppression.

“The world treats Black people differently from white people,” one employee said. “If we are treating everyone the same way, we are already making choices on the wrong side of history.”

Employees said that this policy means the company’s automated content moderation tools proactively detect far more hate speech targeting white people than they do hate speech targeting Black people, even though the hate speech targeted at Black people is widely considered more offensive, a hypothesis supported by academics and the company’s own internal research.

The company has conducted internal research that showed that Facebook users in the United States from both sides of the political spectrum find attacks against traditionally marginalized groups including Black and Hispanic people to be more upsetting than attacks against groups that have not traditionally been marginalized including men and white people — even when the same type of language is used. So “white people are trash” is generally considered less offensive than “Black people are scum,” but Facebook’s policies treat them the same. Data presented at Facebook policy meetings, including one attended by Vanity Fair in fall 2018, shows that users are more upset by attacks against women than they are by attacks against men.

This inequity is reflected in the proportions of hate speech that are reported by users versus taken down automatically. According to a chart posted internally in July 2019 and leaked to NBC News, Facebook proactively took down a higher proportion of hate speech against white people than was reported by users, indicating that users didn’t find it offensive enough to report but Facebook deleted it anyway. In contrast, the same tools took down a lower proportion of hate speech targeting marginalized groups including Black, Jewish and transgender users than was reported by users, indicating that these attacks were considered offensive but Facebook’s automated tools weren’t detecting them.

The employee who posted the chart to Workplace, the internal version of Facebook, said that the findings showed that Facebook’s proactive tools “disproportionately defend white men.”

Facebook spokeswoman Ruchika Budhraja said Wednesday that the company has since early 2018 considered treating different groups differently, as reported by Vanity Fair in February 2019, but that it is “very difficult to parse out who is privileged and who is marginalized globally” and so the company has not changed its policies.

Roadblocks and hurdles

The episodes detailed by the current and former employees add to growing scrutiny from both Facebook critics and the company's own workers over how seriously it takes allegations of racial bias on its platforms. The company is in the midst of a major advertiser boycott that was sparked in part by social justice groups that believe it has not done enough to protect users from discrimination.

Meanwhile, the social media giant is also under fire from Republican politicians who say it has a liberal tilt and unfairly censors conservative voices.

Zuckerberg has tried to maintain that the platform is politically neutral and an advocate for free speech while also declining to make major changes. On Tuesday, The Wall Street Journal reported that Facebook was creating new teams to study racial bias on its platforms.

Facebook employees who were already working on that topic say that for years the company disregarded their work and often instructed them to stop their research.

At around the same time as the Instagram episode, several pieces of research exploring race and racial bias on Facebook and Instagram were summarized and presented in a document to Zuckerberg and his inner circle, known as the M-Team.

The team responded by instructing employees to stop all research on race and ethnicity and not to share any of their findings with others in the company, according to two current employees and one former employee.

Schultz, who said he was part of the M-Team at the time, did not recall the specific communication, but said that some research was stopped over ethics and methodology concerns.

Other attempts to study racial and social bias or oppression on the platform were stopped at the internal research review process, two current and three former employees said. Researchers, many of whom have conducted academic research on societal biases, were told they were not allowed to ask users questions about their racial identity, according to four sources.

Without permission to ask users questions about racial identity, researchers — including those who conducted the Instagram study — relied on a proxy for race called “multicultural affinity,” which categorized users for advertising purposes based on their behavior according to their “affinity” for African American, Asian American or Hispanic people in the United States.

While the current and former employees acknowledged this is not a perfect proxy for race, they said they had few other options for attempting to understand racial bias on the platform. They were also frustrated that the company was comfortable delineating users on the basis of “ethnic affinity” for advertising purposes but not for research purposes.

“Leadership wanted a standard and consistent approach to avoid biased, incorrect and irresponsible work and are proud we set up a project to do that,” Glanville said.

After a ProPublica investigation in 2016, Facebook prohibited advertisers from targeting housing, employment and credit ads based on what was then called “ethnic affinity” to “prevent potential discrimination through ads,” Glanville said.

It’s not the first time Facebook executives have been accused of ignoring internal research highlighting problems on the platform. A team at Facebook delivered a presentation in 2018 showing that Facebook’s algorithms were driving people apart by showing users increasingly divisive content, The Wall Street Journal reported in May, but senior executives including Zuckerberg shelved the research.

Facebook responded to the article by pointing to its "research to understand our platform’s impact on society so we continue to improve.”

'This is not a new issue'

Whether Facebook takes that research into account has become a subject for discussion on the company's internal message boards. One engineer shared data showing the slant of Facebook's moderation practices and offered a pointed criticism of the company.

“This is not a new issue. This has been going on for years. People smarter, harder working, more patient, and more professional than I have fought to address it, only to be shut down by myopic focus on bad metrics,” the engineer wrote, according to screenshots of a post on the company’s internal message boards shared with NBC News. “I’ve seen people be driven insane as leadership ignores them or outright shuts them down and commits us, again and again, to doubling down on this same path.”

The engineer quit the company the same day over leadership’s lack of action on the issue, according to the leaked post and two current employees, and accused Zuckerberg of making misleading statements about the company’s handling of hate speech.

“For months I’ve wrestled with myself, tried to convince myself that the work was definitely good, that it was all worth it — but I can’t,” he wrote. “The issues are too glaring, the failures of leadership too grievous. I’ve lost too much sleep wondering how many people having an awful day I’ve hurt just a little bit more by silencing their opportunity to vent with their friends, or how many other tiny injustices I’ve inflicted in the course of following orders.”

The engineer did not respond to several requests for comment.

“The mere fact we have this research, and continue to find the best way to conduct this and similar research, is because we are trying to understand,” Facebook said.

These revelations echo the findings of an external civil rights audit that Facebook commissioned in 2018 and that was published this month. The audit report found that the company has not done enough to protect users from discrimination, falsehoods and incitement to violence.

“I don’t think they understand civil rights,” said NAACP president Derrick Johnson, who was among the civil rights leaders to meet with Zuckerberg and Sandberg in the week the audit was released. “They have a blind spot to the needs to protect people and unfortunately, far too often, they conflate issues related to civil rights with partisanship. Defeating hate isn’t a partisan question.”