
This viral Schwarzenegger deepfake isn't just entertaining. It's a warning.

Viral fake videos have become a sort of test run and public service announcement, heightening public awareness of deepfake technology.
A still from a deepfake video in which Bill Hader slowly transforms into Arnold Schwarzenegger. (Ctrl Shift Face / via YouTube)

The video starts like dozens of others on YouTube — with former “Saturday Night Live” star Bill Hader offering up a celebrity impression, this time of Arnold Schwarzenegger.

The impression is spot-on, but that’s not why the video has almost 6 million views in the last month. About 10 seconds in, Hader’s face slowly, almost imperceptibly, starts to morph into Schwarzenegger’s. The full transformation takes about six seconds, but the changes are so subtle that it seems like magic. Suddenly, it looks like Schwarzenegger, albeit a skinnier version, is doing an impression of himself.

The video quickly became one of YouTube’s most watched deepfake videos, a burgeoning genre of content on the internet that uses powerful — and often free — software to create extremely lifelike videos of people saying just about anything.

The emergence of these videos has led to growing concern that they could be used to spread a new, powerful form of misinformation ahead of the 2020 elections. Videos of politicians could be easily manipulated to portray them as saying things they never really said. But the Hader-Schwarzenegger video, along with other celebrity-based videos, has become a sort of test run and public service announcement, heightening public awareness of deepfake technology and showing that at least some deepfakes have trouble getting monetized.

“With the Bill Hader video, half of the people who comment don’t know it’s modified,” said Tom, a graphic illustrator from the Czech Republic who created the video, and who asked that NBC News not use his last name out of privacy concerns. Tom was the first to post the video to Reddit and YouTube and sent over examples of his data set and process for making deepfakes.

“We need for people to know what’s possible, and to think before they believe,” he said.

On Thursday, the House Intelligence Committee will hold a hearing to examine what it calls “the national security threats posed by AI-enabled fake content, what can be done to detect and combat it, and what role the public sector, the private sector, and society as a whole should play to counter a potentially grim, ‘post-truth’ future.”

Less-convincing manipulated videos have dominated the news in recent weeks, notably one of House Speaker Nancy Pelosi that was slowed down to falsely portray her as slurring her words. On Tuesday, a deepfake of Facebook CEO Mark Zuckerberg, posted to Instagram as part of an art project, also gained attention.

Tom made the Hader-Schwarzenegger video last month, in part to learn more about machine learning and artificial intelligence. His job involves doing 3D scanning for movies and video games, which he says “may sound like the same thing, but are not the same at all.”

The Schwarzenegger video took Tom a couple of days to create and didn’t cost him a penny. He uses free, open-source software called DeepFaceLab and learned how to meld faces using tutorials online.

“This one was actually the easiest to do,” he said. “It’s an interview. There are two camera angles. They’re using the same light. It was easier.”
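
Tools like DeepFaceLab are generally built around a shared-encoder autoencoder: a single encoder learns a common facial representation from footage of both people, each person gets their own decoder, and swapping the decoders at playback produces the face swap. The PyTorch snippet below is a minimal, illustrative sketch of that idea, trained here on random placeholder images rather than real face crops; it is not DeepFaceLab’s actual code or architecture.

```python
# Minimal sketch of the shared-encoder / two-decoder idea behind face-swap
# tools such as DeepFaceLab. Illustrative only: real tools add face alignment,
# much larger models, extra losses and careful blending back into the frame.
import torch
import torch.nn as nn

def down(c_in, c_out):   # halve spatial resolution
    return nn.Sequential(nn.Conv2d(c_in, c_out, 4, stride=2, padding=1), nn.ReLU())

def up(c_in, c_out):     # double spatial resolution
    return nn.Sequential(nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1), nn.ReLU())

# One encoder shared by both identities...
encoder = nn.Sequential(down(3, 32), down(32, 64), down(64, 128))
# ...and one decoder per identity.
decoder_a = nn.Sequential(up(128, 64), up(64, 32),
                          nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())
decoder_b = nn.Sequential(up(128, 64), up(64, 32),
                          nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())

params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

# Placeholder batches of aligned 64x64 face crops; a real run would use
# thousands of cropped, aligned frames of each person.
faces_a = torch.rand(8, 3, 64, 64)   # e.g. Hader crops (hypothetical data)
faces_b = torch.rand(8, 3, 64, 64)   # e.g. Schwarzenegger crops (hypothetical data)

for step in range(200):
    opt.zero_grad()
    # Each decoder learns to reconstruct its own person from the shared code.
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The swap: encode person A's face, then reconstruct it with person B's decoder.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```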

Despite a rocky start that prominently featured illicit uses of the technology, communities devoted to deepfake videos and to tutorials on how to make them have flourished over the last year on YouTube and Reddit.

Users of Reddit’s deepfake forum posted and solicited requests for fake pornography, with the faces of celebrities and even private citizens superimposed without permission onto graphic sex scenes. Reddit swiftly banned the community in February of last year because of those requests. A replacement community called GIFFakes, which bans pornographic deepfakes, is now thriving, and Tom posts his latest videos there.

Still, researchers say nefarious applications of deepfake technology pose a significant threat to both democracy and the day-to-day lives of average citizens targeted by fake revenge porn.

Danielle Citron, a law professor at the University of Maryland and author of “Hate Crimes in Cyberspace,” is scheduled to testify before the House Intel Committee’s deepfake panel to talk about potential ways — including legislation — to stop deepfakes that could affect elections, personal lives and businesses.

“Deepfakes can cause real, concrete harm. Whether that’s a deepfake sex video, or a fake porn video targeting political enemies, or a well-timed deepfake, maybe used to cause harm to an IPO,” Citron said. “And in unrest, if you time it just right, you can incite violence.”

A deepfake video of Gabon’s President Ali Bongo, released by the government last year, deepened tensions in the African nation and was followed one week later by an unsuccessful coup.

Citron said there has “definitely been thinking going on” among researchers and lawmakers, who “could craft a narrow enough statute, a provision that has to do with elections and disclosure law” that could limit the spread of manipulated videos.

Citron admitted, however, that legislation might not be able to provide a full solution to deepfakes created and distributed from overseas.

“There’s no recourse with those kinds of bad actors. The law is really limited in a whole number of spaces,” said Citron. “There’s a lot of hurdles here. I love the law. I’m a law professor. But we have to be modest.”

Tom said he is aware he could be one of those foreign individuals with a certain set of skills prized by someone looking to do a lot of political harm. That’s why he said he’s sworn off creating political deepfakes or working for someone who wants them.

That doesn’t mean he hasn’t received offers. He said someone in China reached out to see if he could edit a TV series and superimpose faces. (He turned them down, saying the task was too complicated.)

While the software to make deepfakes is not hard to acquire, Tom said there are other barriers to creating fake videos, including the need for powerful computer rigs and a deep data set of pictures of each celebrity, shot from all angles.
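
To give a sense of what assembling such a data set involves, here is a rough sketch of the collection step: scanning a video for faces and saving cropped stills. The file names and paths are hypothetical, and the Haar-cascade detector bundled with OpenCV is used only as a stand-in; deepfake pipelines typically rely on stronger landmark-based detection and alignment.

```python
# Rough sketch of building a face data set from a video clip with OpenCV.
# Paths and file names are illustrative placeholders.
import os
import cv2

os.makedirs("dataset/schwarzenegger", exist_ok=True)

# Frontal-face Haar cascade shipped with OpenCV (a simple stand-in detector).
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("interview_clip.mp4")  # hypothetical source footage
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        # Save a fixed-size crop of each detected face for later training.
        crop = cv2.resize(frame[y:y + h, x:x + w], (256, 256))
        cv2.imwrite(f"dataset/schwarzenegger/{saved:06d}.jpg", crop)
        saved += 1
cap.release()
print(f"saved {saved} face crops")
```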

“It’s a good thing that not everyone can do it,” Tom said. “People on the internet are animals, and they might use it for not very good stuff.”

In the meantime, Tom has been happily surprised with the millions of views his YouTube channel has accrued over the last month.

But he can’t make any money off his Schwarzenegger video. Despite the subtle alterations to Hader’s face, YouTube’s copyright algorithm still detected that the footage was taken from Conan O’Brien’s show and didn’t allow Tom to earn ad money from it.

Tom says he’s hopeful deepfakes will be used more for art and less for political disruption and revenge porn, in part because some of the same people and machine learning tools behind deepfakes are now focused on detecting them.

The creator of the YouTube channel that helped Tom learn how to make deepfakes, a computer graphics and algorithms professor named Károly Zsolnai-Fehér, is particularly focused on it. He has already made a video on his channel, Two Minute Papers, talking up an AI that can detect deepfakes by itself.

But Tom added that he hopes deepfakes aren’t entirely banned.

“When you take photography, Photoshop existed for over a decade,” he said. “We didn’t ban Photoshop because you can do malicious stuff with it. It’s mostly used positively. What’s important is that people are more cautious, just like with some sensationalistic photos that have been in the news.”