
In NYC, companies will have to prove their AI hiring software isn't sexist or racist

AI-infused hiring programs have drawn scrutiny, most notably over whether they end up exhibiting biases based on the data they’re trained on.

New York City businesses that use artificial intelligence to help find hires now have to show the process was free from sexism and racism.

A new law, which takes effect Wednesday, is believed to be the first of its kind in the world. Under New York’s new rule, hiring software that relies on machine learning or artificial intelligence to help employers choose preferred candidates or weed out bad ones — called an automated employment decision tool, or AEDT — must pass an audit by a third-party company to show it’s free of racist or sexist bias.

Companies that run AI hiring software must also publish the results of those audits, and businesses that rely on third-party AEDT software can no longer legally use it if it hasn’t been audited.
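The city’s implementing rules center those audits on “impact ratios”: for each demographic category, the rate at which the tool advances candidates, compared against the most-favored category. As an illustration only — the numbers and group labels below are made up, not drawn from any real audit — the core calculation looks roughly like this:

```python
# Hypothetical sketch of the impact-ratio math behind an AEDT bias audit.
# All counts and group names here are invented for illustration.
selected = {"group_a": 120, "group_b": 45}    # candidates the tool advanced
applicants = {"group_a": 400, "group_b": 200}  # candidates the tool scored

# Selection rate: share of each category's applicants who were advanced.
rates = {g: selected[g] / applicants[g] for g in applicants}

# Impact ratio: each category's selection rate relative to the
# highest-scoring category's rate (1.0 means parity with that category).
best_rate = max(rates.values())
impact = {g: rates[g] / best_rate for g in rates}

print({g: round(r, 3) for g, r in impact.items()})
```

A large gap between a category’s impact ratio and 1.0 is the kind of disparity an audit would have to surface and publish; the law itself does not set a numeric pass/fail threshold.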

Companies are increasingly using automated tools in their hiring processes. Cathy O’Neil, the CEO of Orcaa, a consulting firm that audits hiring tools for companies seeking to comply with New York’s new law, said automated candidate-screening tools have become necessary in part because job seekers are using tools of their own to send out huge numbers of applications.

Photo: Commuters disembark from a Metro-North train in New York City on May 25, 2021. (Alexi Rosenfeld / Getty Images file)

“In the age of the internet, it’s a lot easier to apply for a job. And there are tools for candidates to streamline that process. Like ‘give us your resume and we will apply to 400 jobs,’” O’Neil said. “They get just too many applications. They have to cull the list somehow, so these algorithms do that for them.”

AI-infused hiring programs have drawn scrutiny, most notably over whether they end up exhibiting biases based on the data they’re trained on. Studies have long found that programs that use machine learning or artificial intelligence often exhibit racism, sexism and other biases.

As flashy generative AI applications like ChatGPT and Midjourney have surged in popularity, federal lawmakers and even many tech company executives have repeatedly called for regulation. But so far, there’s little indication from Congress of what that might look like.

Experts say that while the New York law is important for workers, it’s still very limited.

Julia Stoyanovich, a computer science professor at New York University and a founding member of the city’s Automated Decision Systems Task Force, called the law an important start.

“First of all, I’m really glad the law is on the books, that there are rules now and we’re going to start enforcing them,” Stoyanovich said.

“But there are also lots of gaps. So, for example, the bias audit is very limited in terms of categories. We don’t look at age-based discrimination, for example, which in hiring is a huge deal, or disabilities,” she added.

It’s also not clear how the law will be enforced or to what extent.

New York City’s Department of Consumer and Worker Protection, which is charged with enforcing the law, “will collect and investigate complaints” against companies accused of violating it, an agency spokesperson said.

Jake Metcalf, a researcher specializing in AI at Data & Society, a nonprofit group that studies the effects of technology on society, said the wording of the law — it defines an AEDT as technology that will “substantially assist or replace discretionary decision making” — has led lawyers who advise large companies not to take it seriously.

“There are quite a few employment law firms in New York that are advising their clients that they don’t have to comply, given the letter of the law, even though the spirit of the law would seem to apply to them,” Metcalf said. 

“It’s very hard to figure out what ‘substantially’ means,” he said.

Even if the audits help reduce some bias against job candidates, it’s still not clear that AI job screening is particularly good at what it seeks to do.

“I don’t think they’re improving the overall system for the people who are in the system, either for candidates or for hiring managers,” said O’Neil of Orcaa. “It’s just an information overload and not much reason to trust these tools to make things better.”

Stoyanovich went further, saying some automated hiring tools she has studied simply don’t work.

“One of the things that this law does not protect us against are these just nonsensical, bull--- screening methods. I can tell you about some of those that I’ve audited, together with a team of collaborators, where it’s just nonsensical entirely. So it’s not going to be biased, because it’s just random, as far as I can tell. So it’s going to be equally as nonsensical for men and women and Blacks and whites,” Stoyanovich said.