
Senate hearing with deepfake experts tackles elections and sexual abuse

Deepfake-detection software experts shared concerns about existing plans to curb malicious deepfakes.
Sen. Josh Hawley, R-Mo., attends a Senate hearing on March 15, 2023. Tom Williams / CQ-Roll Call via AP file

Three deepfake-detection software experts testified Tuesday before the Senate Judiciary Subcommittee on Privacy, Technology and the Law about how Congress can regulate artificial intelligence, particularly the rise of malicious deepfakes.

Deepfakes are misleading audio, video or images that are created or edited with AI technology.

In January, deepfake audio that impersonated President Joe Biden telling Democrats not to vote in the New Hampshire primary was sent to thousands of voters in a robocall. Other deepfakes use AI to "undress" real photos of women and girls, juxtaposing their real faces with fake nudity. Both types of deepfakes were discussed during Tuesday's hearing, which focused on disinformation in the 2024 election cycle.

Sen. Richard Blumenthal, D-Conn., the subcommittee’s chairman, said Congress’ proposed regulatory framework would require independent testing of AI technology before it is released to the public, along with potential penalties.

“We should make no mistake: The threat of political deepfakes is real, it’s happening now,” he said. “It’s not science fiction coming at some point in the future, possibly or hypothetically. Artificial intelligence is already being used to interfere with our elections, sowing lies about candidates and suppressing the vote.”

Zohaib Ahmed, CEO and co-founder of Resemble AI; Ben Colman, CEO and co-founder of Reality Defender; and Rijul Gupta, CEO of DeepMedia, testified at the hearing.

Eleven states have passed laws banning deepfake election interference, said Sen. Amy Klobuchar, D-Minn. Other states have passed legislation allowing victims of deepfake sexual abuse to sue. But federal bills addressing both issues have stalled in the House and the Senate without votes, which Sen. Josh Hawley, R-Mo., urged leaders of both parties to schedule.

"The danger of this technology without guardrails or safety features is becoming painfully apparent," Hawley said. "Let’s not allow these same companies that control the social media technology in this country, that control the news in this country, to also now use AI to further their hold on this country and the political process."

In the case of the New Hampshire Biden robocall, NBC News reported that the voice-cloned audio was created by a street magician who had ties to a rival Democratic campaign. The audio was created with software from ElevenLabs, which anyone can access and use.

NBC News also reported that audio created with ElevenLabs contains a hidden watermark, a technology that numerous big tech companies have proposed as a solution to deepfakes. But in the process of creating the robocall, the watermark was stripped from the audio that primary voters received.

"Anybody with a Google search and internet connection can make anything as entertaining and dangerous as they can imagine," said Colman of Reality Defender. "We’ve seen time and time again they just aren’t going to follow the rules."