
Algorithms Learn From Us, and We Can Be Better Teachers

As they learn from real-world data, algorithms pick up sexist, racist, and otherwise damaging biases that have long plagued human culture. These scientists are finding the solution.
Microsoft's infamous Tay chatbot was taken down in less than a day after tweeting racist and sexist thoughts it learned from humans. NBC News

Until a few years ago, computers couldn’t tell the difference between a picture of a dog and a picture of a cat. Today, computer programs use machine-learning algorithms to study piles of data and learn about the world and its people.

Algorithms can tell a banker whether someone will pay back a loan. They can pick the right applicant out of thousands of resumes. They help judges determine a prisoner’s risk of returning to a life of crime. By weighing troves of data, these algorithms can reduce human error and improve our decision-making.

But we’re also learning that algorithms, like humans, can discriminate. As they learn from real-world data, algorithms pick up sexist, racist, and otherwise damaging biases that have long plagued human culture.


The past year has been full of shocking examples: Google’s photo app classified images of black people as gorillas. Nikon’s smart camera thought Asian people were blinking. Microsoft’s AI-powered Tay was an innocent chatbot designed to learn by conversing with people on Twitter; in less than a day it sent out such abhorrent tweets that it had to be taken down. Last year, a ProPublica investigation revealed that software used to score a defendant’s risk of recidivism was twice as likely to falsely flag black defendants as high risk.

Most of these incidents are unintentional. Machine learning is a type of artificial intelligence that enables algorithms to extract patterns from data. And not all the patterns they find are fair or correct.

“We are increasingly seeing machine-learning methods are used to study social processes and make recommendations about, for example, who to hire,” said Hanna Wallach, a senior researcher at Microsoft Research and adjunct associate professor at UMass Amherst. “In reality, these social processes themselves have a number of structural biases. We definitely don't live in a perfect world.”

Now that these issues have surfaced, computer scientists and ethicists are looking for ways to detect and fix algorithmic bias and prevent a future in which minorities continue to be disadvantaged by the spillover of prejudice into artificial intelligence. Their efforts may signify a shift of culture in computer science, a field historically focused on making technical tools without having to worry about their social consequences.

“Physics had its ethical moment with the Manhattan project: a very horrifying, crystal clear event,” says Suresh Venkatasubramanian, a computer scientist at the University of Utah. “I think computer science has had more of a slow drip of those moments.”

Teaching Machines Not to Be Sexist

Most parents are careful to watch their language in front of the kids. Systems equipped with artificial intelligence may require a similar treatment.

Some algorithms are designed to learn about the world by reading text, and in doing so they pick up useful associations between words. The algorithms learn, for example, that the word king is related to the word man, and likewise, queen is related to woman. But language can be biased. Two years ago, Adam Kalai and his colleagues at Microsoft Research noticed that algorithms were learning word associations that aren’t objectively true but merely reflect historical biases in society. For example, an algorithm learned that man is related to engineer, whereas woman is related to homemaker.
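The analogy arithmetic these algorithms learn looks roughly like the sketch below, which uses the open-source gensim library and a placeholder embedding file; any word2vec-style embedding trained on ordinary web text tends to show this kind of association, though the exact setup Kalai’s team studied isn’t reproduced here.

```python
# A sketch of word-embedding analogy arithmetic. "pretrained_embeddings.bin"
# is a placeholder for any word2vec-format embedding trained on web text.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("pretrained_embeddings.bin", binary=True)

# "man is to king as woman is to ?"  (usually answered with "queen")
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))

# The same arithmetic applied to occupations can surface learned stereotypes:
# "man is to engineer as woman is to ?"
print(vectors.most_similar(positive=["engineer", "woman"], negative=["man"], topn=1))
```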

Imagine such a system is tasked with finding the right resume for a programming job. If the applicant’s name is John, he’s more likely to be picked than someone named Mary.

“The good news is that once we detect the bug, we can fix this,” Kalai says. “We can remove these biases from the computer and it’s actually easier than trying to remove the same bias from a person.”
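One way to fix the bug, broadly in the spirit of the approach Kalai and his colleagues later published, is to identify a "gender direction" in the embedding space and project it out of words that should be gender-neutral. The sketch below uses made-up toy vectors rather than real embeddings:

```python
import numpy as np

def remove_gender_component(word_vec, gender_direction):
    """Project out the component of a word vector that lies along the gender direction."""
    d = gender_direction / np.linalg.norm(gender_direction)
    return word_vec - np.dot(word_vec, d) * d

# Toy vectors standing in for real embeddings of "he", "she", and "engineer";
# here "engineer" leans toward "he" along the first axis.
he = np.array([0.9, 0.1, 0.3])
she = np.array([-0.8, 0.2, 0.3])
engineer = np.array([0.7, 0.5, 0.4])

gender_direction = she - he
print(remove_gender_component(engineer, gender_direction))
```

The published method is more careful than this: it averages several he/she-style word pairs to estimate the gender direction and neutralizes only words that should not carry gender. But the core operation is this projection.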

Teaching Fairness to an Algorithm

While most humans have an instinctive idea of what it means to be fair, it’s much harder to come up with a universal definition of fairness, turn that into a mathematical expression, and teach it to an algorithm.

"We can remove these biases from the computer and it's actually easier than trying to remove the same bias from a person."

But it is possible to find out when an algorithm is treating users unfairly based on their gender or race. Last year, a Carnegie Mellon research team tested Google ads for bias by simulating people searching online for jobs. The team discovered that high-income jobs were shown to men much more often than they were shown to women. These types of problems could be prevented if companies ran simulations internally to test their algorithms, the researchers said.
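The kind of internal simulation the researchers describe can be quite simple in outline. The sketch below is entirely hypothetical: serve_ad() stands in for the real ad-serving system and the numbers are invented. The audit creates many otherwise-identical profiles that differ only in gender and compares how often each group is shown the high-paying-job ad.

```python
import random

def serve_ad(profile):
    # Hypothetical stand-in for the system under test; a real audit would
    # drive actual browser sessions or API calls instead.
    return random.random() < (0.07 if profile["gender"] == "male" else 0.03)

def audit(n=10_000):
    """Rate at which the high-income job ad is shown to simulated male and female profiles."""
    rates = {}
    for gender in ("male", "female"):
        shown = sum(serve_ad({"gender": gender}) for _ in range(n))
        rates[gender] = shown / n
    return rates

rates = audit()
print(rates, "male/female ratio:", round(rates["male"] / rates["female"], 2))
```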

An international team from the United States, Switzerland, and Germany has developed a toolkit called FairTest, which helps developers check their applications for bias. It looks for unfair associations that the program might inadvertently form and can reveal, for example, biases against older people in a health application or offensive racial labeling in an image-tagging app.
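FairTest’s own interface isn’t reproduced here, but the underlying idea can be sketched: check whether an application’s output is statistically associated with a protected attribute. Below, a hypothetical health app’s discount offers are cross-tabulated against age group and tested with a chi-squared test; the counts are invented.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows are age groups, columns are
# "offered the discount" vs. "not offered".
counts = np.array([
    [480, 520],  # under 40
    [310, 690],  # 40 and over
])

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2={chi2:.1f}, p={p_value:.4f}")
# A very small p-value flags an association a developer should inspect by hand.
```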

Checking for fairness could eventually become a basic aspect of programming, says Samuel Drews, a graduate student at the University of Wisconsin-Madison who works on a similar fairness-checking tool.

“Since the earlier days of computer programming, going back to Alan Turing, people were concerned with developing a proof that the code does what it’s supposed to do, that the code is formally correct,” Drews says. “So maybe thinking forward, one definition of correctness for algorithms should be that they are fair. That they don’t discriminate against people.”
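In that spirit, a fairness check can look like an ordinary unit test. The sketch below asserts that a hypothetical hiring model’s decision does not change when only the applicant’s gender is flipped; model.predict and the applicant fields are placeholders, and this is just one narrow notion of fairness among many.

```python
def assert_gender_blind(model, applicant):
    """Fail if flipping only the gender field changes the model's decision."""
    original = model.predict(applicant)
    flipped = dict(applicant, gender="female" if applicant["gender"] == "male" else "male")
    assert model.predict(flipped) == original, f"decision changed for {applicant}"
```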

Holding Algorithms Accountable

What if someone is refused a loan and suspects it had something to do with the area they live in, which happens to include a lot of people with low credit scores?

They may never find out what happened, as it’s extremely difficult to inquire into the inner workings of an algorithm or to legally challenge it, explains Solon Barocas, a researcher at Microsoft Research.

“It requires a significant amount of insight into how the model is developed, and often people who are discriminated against won't be in a position to learn those things,” Barocas says.

It doesn’t help that most algorithms are black boxes whose details are closely guarded secrets. This, however, may change soon. The European Union has just introduced regulations that will require algorithms, such as those used by lending banks, to explain their decision-making process, essentially giving people a “right to explanation.”

It’s also becoming easier to build algorithms that can explain what they do.

“It’s computationally hard to build something that’s transparent, so most people don’t even try,” says Cynthia Rudin, an associate professor of computer science at Duke University who works on publicly available tools for building transparent algorithms. “But we are now at a stage where the computers are powerful enough so we shouldn't be too scared of those problems.”
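One route to that kind of transparency, though not necessarily the specific tools Rudin works on, is to fit a deliberately small model whose reasoning can be printed and read. The sketch below trains a two-level decision tree on invented loan data with scikit-learn and prints its rules.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented applicants: [credit_score, debt_to_income]; label 1 = repaid the loan.
X = [[640, 0.45], [720, 0.20], [580, 0.60], [700, 0.35], [690, 0.25], [610, 0.50]]
y = [0, 1, 0, 1, 1, 0]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(model, feature_names=["credit_score", "debt_to_income"]))
```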


The next generation of computer scientists may be educated differently, says Venkatasubramanian, who sees a growing trend of courses about developing fair algorithms. “We are working on building educational modules that anyone in data mining and machine learning can use to understand issues of ethics and fairness."

All these efforts could lead to big rewards. Algorithms have the potential to become better and fairer decision-makers than humans. We can’t expect a recruiter to look at a job applicant and completely ignore his or her gender. But we can carefully shield algorithms from learning the implicit biases that humans find hard to shake off.

Despite the challenges, the future looks promising, Venkatasubramanian says.

“What makes me excited is that just two years ago it was so shocking to see the problems but now we are working on it and have tools. We just have to figure out how to most effectively use them.”