
Timnit Gebru is part of a wave of Black women working to change AI

Gebru said she was pushed out of her job at Google after pointing out how AI has been harmful to people of color. Now she's launching her own research institute.
Timnit Gebru speaks at TechCrunch Disrupt in San Francisco in 2018. (Kimberly White / Getty Images for TechCrunch)

A computer scientist who said she was pushed out of her job at Google in December 2020 has marked the one-year anniversary of her ouster with a new research institute aiming to support the creation of ethical artificial intelligence. 

Timnit Gebru, a prominent advocate for diversity in AI, announced the launch of the Distributed Artificial Intelligence Research Institute, or DAIR. Its website describes it as “a space for independent, community-rooted AI research free from Big Tech’s pervasive influence.”

Part of how Gebru imagines creating such research is by moving away from the Silicon Valley ethos of “move fast and break things” — which was Facebook’s internal motto, coined by Mark Zuckerberg, until 2014 — to instead take a more deliberate approach to creating new technologies that serve marginalized communities. That includes recognizing and mitigating technologies’ potential for harm from the beginning of the creation process, rather than after they’ve already caused damage to those communities, Gebru told NBC News.

“If those are our values, we can’t achieve them without slowing down and without putting in more resources per project that we’re working on,” she said. 

Gebru said she learned from a December 2020 email from her manager’s manager that she had apparently resigned from her high-profile position as a co-lead of Google’s ethical AI team.

Gebru said she never resigned, but was instead fired after requesting that executives explain why they demanded that she retract a paper she co-authored. The paper examined how large language models — AI systems trained on vast amounts of text data, a version of which underpins Google’s own search engine — could reinforce racism, sexism and other systems of oppression. 

Google’s head of research, Jeff Dean, said in a company email the paper “didn’t meet our bar for publication,” though others within the company cast doubt on that claim.

Prior to her departure from Google, Gebru also emailed her colleagues informing them of the retraction request and detailing her frustrations with what she characterized as the company’s subpar efforts to create a more diverse and inclusive workplace. 

The news of the alleged firing made headlines in the tech world and beyond, and it mobilized thousands of Google employees to join a solidarity campaign in support of Gebru, who is also the co-founder of Black in AI. At least two engineers resigned in protest of Gebru’s ousting. Google declined to comment for this story.

A year later, DAIR has found financial support from major backers. The MacArthur Foundation, the Ford Foundation, Open Society Foundations, The Rockefeller Foundation and the Kapor Center have provided a cumulative $3.7 million in grants, Gebru said.

She plans to publish DAIR’s research findings in academic journals as well as on alternative platforms, and at a slower pace than the traditional timelines of the tech industry and academia, she said. Researchers will be encouraged to disseminate their findings in forms that are accessible to the public, including websites and different forms of data visualization, Gebru said, adding that use of some DAIR data sets may require approval to maintain the institute’s mission of encouraging ethical applications of AI. 

Of how she thinks about the relationship between future DAIR research and the actions of large tech companies like Microsoft or Google, Gebru said, “DAIR isn’t doing research for these companies but in the public interest.” 

DAIR researchers will be recruited from, and embedded in, communities around the world, rather than being expected to originate from, or converge in, U.S. tech hubs, she added. DAIR’s first fellow, Raesetje Sefala, is based in Johannesburg, where she has been researching the legacy of spatial apartheid by creating the first publicly available data set of townships, or underdeveloped urban areas where Black South Africans were segregated through the end of apartheid in the 1990s. 

Gebru and Sefala plan to continue to expand the research and hope to partner with policymakers to “help us advocate for policies that desegregate neighborhoods,” their paper on the project noted.

Coming from the community she studied, Sefala said, was crucial to the project’s success.

“Just having that knowledge and experience of coming from a township, firstly I was able to better coordinate how to label those neighborhoods, and when the models were getting it wrong, it was very easy for me to go in and see why,” she said. “If you don’t know anything about townships and you just have this data set, I think it would’ve been very difficult for you to understand.”

The project, Gebru said, is one example of how AI research can be enriched by researchers’ diversity of perspectives and lived experiences. 

“There is no way I would have done this research on South Africa if it wasn’t for all my collaborators who are South African,” she said. “Their knowledge is just not something I can acquire myself.”

In founding DAIR, Gebru joins a wave of Black women researchers who have launched their own independent institutes dedicated to pioneering more ethical and accountable applications of AI systems, including Yeshimabeit Milner, founder of Data for Black Lives; Ruha Benjamin, founder of the Ida B. Wells Just Data Lab at Princeton University; and Joy Buolamwini, founder of the Algorithmic Justice League.

Buolamwini and Gebru co-authored an influential 2018 paper that showed that facial recognition technologies — used by Microsoft, IBM and the Chinese company Megvii — misclassified darker-skinned women at much higher rates than they did lighter-skinned men. Following the publication of that paper, IBM and Microsoft released statements acknowledging the research and announcing their commitments to improving the accuracy of their facial recognition technologies.

The findings from that paper also led IBM, Microsoft and Amazon to stop offering their facial recognition technologies to police. 

In recent years, a growing body of research has found race-, gender- and ability-based biases embedded in algorithms used in policing, health care, hiring and remote testing, among other domains.

Gebru, Buolamwini and others have attributed these biases to the underrepresentation of women of all races, and people of color of all genders, in the AI workforce.

A report published last year by the Stanford Institute for Human-Centered Artificial Intelligence found that women accounted for just 18.3 percent of graduates of AI and computer science doctoral programs over the past 10 years, and that Black and Latino 2019 graduates of AI doctoral programs accounted for just 2.4 percent and 3.2 percent of graduates overall, respectively. The report also cited a 2020 survey of 100 members by Queer in AI, which found that nonbinary people accounted for less than 10 percent of the group’s members and that transgender women and men accounted for 5 percent and 2.5 percent of members, respectively. (The report did not measure the intersections of gender identity and race, nor did it measure the number of disabled people in AI.) 

A 2019 report published by the AI Now Institute at New York University found that women constitute 18 percent of authors at leading AI conferences and between 10 and 15 percent of AI research staff at Facebook and Google. Black and Hispanic workers constituted between 2.5 and 6 percent of workers at Google, Facebook and Microsoft, the report noted. 

Buolamwini said DAIR’s mission is critical in light of these inequities that are embedded into both the AI workforce and the technologies themselves. 

“DAIR’s focus on research that centers the lived experiences of the excoded — those impacted by algorithmic harms — is a necessary intervention in a tech ecosystem that so often excludes, exploits, and expunges the very people who can transform the industry from within and without,” Buolamwini said in an emailed statement.

“Dr. Timnit Gebru’s intellect is only outmatched by the depth of her compassion and the strength of her convictions in fighting for those whom society has relegated to the margins,” Buolamwini added, noting that she also hopes to see the organization receive more funding in the future.

Gebru does, too: She’s hopeful that DAIR’s alternative approach to conducting AI research will give rise to new incentive structures that reward and fund research based not on the speed at which it’s produced, but on the communities that it serves.

“A proactive approach means funding other visions of AI,” she said.
