The Federal Communications Commission is moving to explicitly outlaw unsolicited robocalls that use voices generated with artificial intelligence, the agency said Wednesday.
The proposal would outlaw such robocalls under the Telephone Consumer Protection Act, or TCPA, a 1991 law that regulates automated political and marketing calls made without the recipients’ consent.
The TCPA has been used in several high-profile enforcement actions against illegal robocallers. Last year the FCC imposed a $5 million penalty on conservative activists who in 2020 arranged for Black voters to receive calls falsely telling them that voting could expose them to debt collectors and police departments. It also imposed a $300 million fine on a company that spammed phones with auto warranty ads.
The five-member commission is expected to vote on and pass the change in the coming weeks, an FCC spokesperson said.
The change would particularly empower state attorneys general to take legal action against spammers who use AI, the spokesperson said. The New Hampshire attorney general’s office has announced an investigation into the fake Biden call.
“AI-generated voice cloning and images are already sowing confusion by tricking consumers into thinking scams and frauds are legitimate,” FCC Chairwoman Jessica Rosenworcel said in an emailed statement.
“No matter what celebrity or politician you favor, or what your relationship is with your kin when they call for help, it is possible we could all be a target of these faked calls,” she said.
Kathy Stokes, the director of fraud prevention programs at AARP, formerly the American Association of Retired Persons, welcomed the FCC’s move, saying AI can be used to supercharge scams targeting seniors.
“We’ve deprioritized fraud as a crime in this country, which comes from us immediately having a knee-jerk reaction of blaming the victim for not knowing something,” Stokes said.
“We cannot educate our way out of this,” she said.