America's largest maker of police body cameras said Thursday it had formed an ethics board to guide its work on artificial intelligence, prompting a rebuke from civil rights organizations that urged the company not to develop devices able to identify people by their faces.
The announcement by Axon, formerly known as Taser for its line of stun guns, marks a new development in an ongoing battle over the uses, and feared abuses, of body cameras. The devices have exploded in popularity in recent years, as high-profile officer killings of black men ignited calls for greater police transparency. The cameras have also raised privacy concerns, as they allow police to capture what people do in public and behind closed doors.
Facial recognition is already used in many aspects of modern life, from tagging functions on Facebook and unlocking iPhones to identification checks at some airports. Some big American police departments use facial recognition software to scan video footage of crimes and identify suspects. Most Americans' driver's license photos, passport photos and mugshots are converted by the government into a format that allows them to be scanned by facial recognition systems, researchers say.
But the leap to real-time facial recognition in body cameras is still in its infancy. China says it has equipped police with body-worn cameras with facial recognition capabilities. Such technology has not been deployed in the United States, although researchers say the possibility of that happening isn't too far off.
That has triggered a furious debate about how such cameras would be used, and the risks of using them to expand public surveillance.
Axon is the body camera industry leader in the United States, and last year acquired two artificial intelligence companies, fueling speculation that it was seeking to develop police body cameras with facial recognition. Although Axon CEO Rick Smith has talked in the past about using facial recognition technology to search faces in crowds for wanted criminals, the company says it is not currently developing products that would do that. Its current AI-related work is focused on helping police departments sift through footage and automatically blur images like victims' faces on officers' dashboard computers before the footage is released to the public. The company also says it is using artificial intelligence to automate report writing.
Thursday's statement doesn't mention facial recognition, although a company spokesman said it is on the agenda for the board's first meeting.
"We'd rather get it right than be first to market for any technology," said Michael Wagers, an Axon vice president and chair of the ethics board. "That's certainly true for facial recognition."
The company said in the statement that it would rely on the new ethics board, made up of experts in artificial intelligence, computer science, privacy law, civil liberties and policing, to "discuss ethical implications of AI-powered technologies" under development.
"We believe the advancement of AI technology will empower police officers to connect with their communities versus being stuck in front of a computer screen doing data entry," Smith said in the statement. "We also believe AI research and technology for use in law enforcement must be done ethically and with the public in mind."
The company's AI director, Moji Solgi, promised Axon would be "transparent with our customers and the general public" about its commitment to individual rights and privacy, starting with the release of annual reports outlining the ethics board's work.
But a number of civil rights organizations are not satisfied. Forty-two groups, including the NAACP, the American Civil Liberties Union, the National Urban League and the Electronic Frontier Foundation, joined in an open letter to the ethics board saying the company should reject as "categorically unethical" any products that allow "real-time face recognition analysis of live video captured by body-worn cameras."
Such devices, the civil rights groups said, could be used to chill freedom of speech at political protests, and could misidentify innocent people as criminal suspects.
Researchers have said that facial recognition algorithms often fail to identify, or misidentify, people of certain racial or ethnic groups. The civil rights organizations cited a recent MIT study that found algorithms created by Microsoft, IBM and China-based Face++ had high misclassification rates for dark-skinned women.
Harlan Yu, executive director of Upturn, which monitors police agencies' body camera policies and is one of the letter's signatories, said the groups felt it important to "draw a bright ethical line" around real-time facial recognition body cameras. He also said the groups were troubled by the lack of representation on the ethics board of communities, particularly those made up of racial minorities, that are subject to intense police scrutiny.
"Just because real-time face recognition might be technologically feasible to do doesn’t mean they should," Yu said.
Yu said he declined an invitation to join the Axon ethics board in part because it required signing a non-disclosure agreement.
Wagers said the non-disclosure agreements covered the company's intellectual property but did not prevent board members from talking publicly about the issues they discuss.
He added that Axon would work to expand the ethics board so that it reflects critics' concerns.