
No quick fix: How OpenAI's DALL·E 2 illustrated the challenges of bias in AI

OpenAI released the second version of its DALL·E image generator in April to rave reviews, but efforts to address societal biases in its output have illustrated systemic underlying problems with AI systems.
OpenAI's DALL·E 2 has become a hot topic among technologists who see its biases as illustrative of problems with AI technology. Chelsea Stahl / NBC News Illustration

An artificial intelligence program that has impressed the internet with its ability to generate original images from user prompts has also sparked concerns and criticism for what is now a familiar issue with AI: racial and gender bias. 

And while OpenAI, the company behind the program, called DALL·E 2, has sought to address the issues, the efforts have also come under scrutiny for what some technologists have claimed is a superficial way to fix systemic underlying problems with AI systems.

“This is not just a technical problem. This is a problem that involves the social sciences,” said Kai-Wei Chang, an associate professor at the UCLA Samueli School of Engineering who studies artificial intelligence. There will be a future in which systems better guard against certain biased notions, but as long as society has biases, AI will reflect that, Chang said.

OpenAI released the second version of its DALL·E image generator in April to rave reviews. The program asks users to enter a text prompt, for example: “an astronaut playing basketball with cats in space in a minimalist style.” With spatial and object awareness, DALL·E then creates four original images that are supposed to reflect the words, according to the website.
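At the time, DALL·E 2 was available only through a web interface to invited testers, but OpenAI’s later public Images API exposes the same prompt-in, images-out pattern. Below is a minimal sketch of such a request, assuming the openai Python package and an API key configured in the environment; it is an illustration, not the research preview described in this article.

    # Minimal sketch: request four images for a single text prompt.
    # Assumes the openai Python package (v1+) and OPENAI_API_KEY set in the
    # environment; the DALL·E 2 preview covered here was web-only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    result = client.images.generate(
        model="dall-e-2",
        prompt="an astronaut playing basketball with cats in space in a minimalist style",
        n=4,                # DALL·E returns four candidate images per prompt
        size="1024x1024",
    )
    for image in result.data:
        print(image.url)   # each entry is a temporary URL to a generated image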

As with many AI programs, it did not take long for some users to start reporting what they saw as signs of bias. OpenAI itself offered the example that the caption “a builder” produced images featuring only men, while “a flight attendant” produced only images of women. Anticipating those problems, OpenAI published a “Risks and Limitations” document alongside the program’s limited release, noting that “DALL·E 2 additionally inherits various biases from its training data, and its outputs sometimes reinforce societal stereotypes.”

DALL·E 2 builds on another piece of AI technology created by OpenAI called GPT-3, a natural language processing program trained on hundreds of billions of examples of language from books, Wikipedia and the open internet to approximate human writing.

Last week, OpenAI announced that it was implementing new mitigation techniques that helped DALL·E generate images that better reflect the diversity of the world’s population, and it claimed that internal users were 12 times more likely to say the images included people of diverse backgrounds.

The same day, Max Woolf, a data scientist at BuzzFeed who was one of a few thousand people granted access to test the updated DALL·E model, started a Twitter thread pointing out that the updated technology was less accurate than before at creating images from his written prompts.

Other Twitter users who tested DALL·E 2 replied to Woolf’s thread reporting the same issue, specifically regarding race and gender biases. They suspected OpenAI’s diversity fix was as simple as appending gender- or race-identifying words to users’ written prompts without their knowledge, artificially producing more diverse sets of images.

“The way this rumored implementation works is it adds either male or female or Black, Asian or Caucasian to the prompt randomly,” Woolf said in a phone interview. 
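In code, the approach Woolf and other testers suspected would take only a few lines. The sketch below is a hypothetical reconstruction of that rumored behavior, not OpenAI’s actual implementation, and the word list simply mirrors the terms testers named.

    import random

    # Hypothetical reconstruction of the rumored mitigation (not OpenAI's code):
    # silently append a randomly chosen gender or ethnicity term to the user's
    # prompt before it reaches the image model.
    DEMOGRAPHIC_TERMS = ["male", "female", "Black", "Asian", "Caucasian"]

    def diversify_prompt(prompt: str) -> str:
        """Return the prompt with a random demographic term appended."""
        return f"{prompt}, {random.choice(DEMOGRAPHIC_TERMS)}"

    # A request for "a builder" might silently become "a builder, female".
    print(diversify_prompt("a builder"))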

OpenAI published a blog post last month addressing its attempt to fix biases by reweighting certain data; it did not mention anything about adding gender or race designators to prompts.
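One general way to reweight data against filter-induced skew is to give extra weight to examples from groups the filter removed disproportionately, so the filtered set’s makeup stays close to the original. The sketch below, with made-up labels and counts, illustrates that general technique; it is not OpenAI’s code or its exact method.

    from collections import Counter

    # Illustrative reweighting: up-weight examples from groups that a content
    # filter removed disproportionately, so the filtered set's group makeup
    # matches the unfiltered one. Labels and counts are made up.
    def reweight(pre_filter_labels, post_filter_labels):
        pre, post = Counter(pre_filter_labels), Counter(post_filter_labels)
        pre_total, post_total = len(pre_filter_labels), len(post_filter_labels)
        return {
            group: (pre[group] / pre_total) / (post[group] / post_total)
            for group in post
        }

    # Made-up example: the filter removed far more images labeled "woman".
    weights = reweight(["woman"] * 500 + ["man"] * 500,
                       ["woman"] * 350 + ["man"] * 450)
    print(weights)  # images labeled "woman" get a weight above 1.0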

“We believe it’s important to address bias and safety at all levels of the system, which is why we pursue a range of approaches,” an OpenAI spokesperson said in an email. “We are researching further ways to correct for biases, including best practices for adjusting training data.” 

Concerns about bias in AI systems have grown in recent years as systems used in automated hiring, health care and algorithmic moderation have been found to discriminate against various groups. The issue has sparked talk of government regulation. New York City passed a law in December 2021 that banned the use of AI in screening job candidates unless the technology passed a “bias audit.”

A large part of the issue around AI bias comes from the data used to train AI models to make decisions and produce the desired outputs. That data often carries built-in prejudices and stereotypes, the product of societal biases or human error, such as photo data sets that portray men as executives and women as assistants.

AI companies, including OpenAI, then use data filters to keep graphic, explicit or otherwise unwanted results, in this case images, from appearing. But when the training data is put through those filters, a phenomenon OpenAI calls “bias amplification” can leave the results even more skewed than the original training data.
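A toy calculation with made-up numbers shows how that amplification can happen: if a filter wrongly removes images of one group more often than images of another, that group’s share of the remaining training data shrinks below its share of the original data.

    # Toy illustration of filter-induced bias amplification (made-up numbers).
    raw = {"woman": 400, "man": 600}  # 40% of the raw images depict women

    # Hypothetical filter that over-flags images of women as unwanted:
    # it removes 30% of them but only 10% of the images of men.
    kept = {"woman": raw["woman"] * 0.70, "man": raw["man"] * 0.90}

    def share(counts, group):
        return counts[group] / sum(counts.values())

    print(f"women in raw data:     {share(raw, 'woman'):.1%}")   # 40.0%
    print(f"women after filtering: {share(kept, 'woman'):.1%}")  # about 34.1%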

That makes AI bias particularly difficult to fix after a model has been built.

“The only way to really fix it is to retrain the entire model on the biased data, and that would not be short term,” Woolf said. 

Chirag Shah, an associate professor in the Information School at the University of Washington, said that AI bias is a common issue and that the fix OpenAI appeared to have come up with did not resolve the underlying issues of its program.

“The common thread is that all of these systems are trying to learn from existing data,” Shah said. “They are superficially and, on the surface, fixing the problem without fixing the underlying issue.”

Jacob Metcalf, a researcher at Data & Society, a nonprofit research institute, said a step forward would be for companies to be open about how they create and train their AI systems.

“For me the problem is the transparency,” he said. “I think it’s great that DALL·E exists, but the only way these systems are going to be safe and fair is maximalist transparency about how they are governed.”