
A 'cheapfake' Dr. Oz poster went viral on social media. The fact check did not.

Deepfakes generate concern about their potential to interfere with the democratic process. But a more lightweight form of synthetic media is already influencing political realities.
Republican Senate candidate Mehmet Oz speaks during a news conference in Philadelphia on Tuesday. Hannah Beier / Bloomberg via Getty Images

In late August, a photo of U.S. Senate hopeful Dr. Mehmet Oz went viral. In it, Oz stands surrounded by what appear to be restaurant workers, one of whom is holding an “OZ” sign turned on its side so that it reads “NO.” It turned out to be a doctored image, but the fact check didn’t surface until after the photo had been shared tens of thousands of times.

A bit earlier in the month, Fox News host Brian Kilmeade, filling in on “Tucker Carlson Tonight,” aired an edited photo of Judge Bruce Reinhart, who authorized the FBI’s search of Mar-a-Lago, seemingly receiving a foot massage from Ghislaine Maxwell inside a private jet.

Ever since deepfakes first appeared in 2017, they have generated concern about their potential to interfere with the democratic process by manipulating the truth online. So far, their impact on the political process has been relatively limited. That’s not the case, however, with a more lightweight form of synthetic media often called “cheapfakes,” which is already being used to influence and reshape political realities.

The Kilmeade and Oz cheapfakes are reminders that it doesn’t take a high-fidelity deepfake to generate mis- and disinformation. And it’s never been easier for the average person to create believable forgeries. Amid hotly contested midterm elections in a polarized political landscape, this content can travel faster than ever, with the potential to mislead the public, entrench division and obstruct the flow of factual information heading into Election Day.

Deepfakes and cheapfakes share rhyming names and an intent to deceive, but in practice they don’t have much in common.

Deepfakes involve applying a form of machine learning called generative adversarial networks (GANs) to believably fake or substitute faces and voices in video. The technology first gained notoriety when it was used to graft celebrities’ faces onto adult film performers, but other notable examples have emerged in arts and entertainment. These include Bill Posters and Daniel Howe’s art project “Big Dada,” which depicts celebrities like Mark Zuckerberg and Kim Kardashian commenting on data and surveillance; and Ctrl Shift Face’s 2019 deepfake in which Bill Hader’s face morphs into Tom Cruise’s and Seth Rogen’s as the comedian does impressions of them. More recently, visual effects artist Chris Ume went viral on TikTok for his series of Tom Cruise deepfakes.
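For the technically curious, the adversarial idea at the core of a GAN is compact enough to sketch in code. The toy example below, written in Python with the PyTorch library, illustrates only the training loop, not any production face-swap system (those layer face detection, alignment and blending on top): a generator learns to produce samples a discriminator can no longer tell apart from real ones.

```python
# Toy sketch of a generative adversarial network (GAN), the idea behind
# deepfakes. Illustration only: real face-swap pipelines add face detection,
# alignment and blending on top of this adversarial loop.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # stand-in sizes; real images are far larger

# Generator: turns random noise into a fake sample (a stand-in for an image).
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: scores how likely a sample is to be real.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1_000):
    real = torch.randn(32, data_dim)  # placeholder for a batch of real images
    fake = G(torch.randn(32, latent_dim))

    # Train the discriminator to tell real samples from generated ones.
    d_loss = (loss_fn(D(real), torch.ones(32, 1))
              + loss_fn(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```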

Cheapfakes, on the other hand, are manipulated photos and videos made with conventional audio and visual editing techniques rather than artificial intelligence. In addition to the recent Oz and Kilmeade examples, perhaps the most effective example in recent memory was the 2019 “drunk” Nancy Pelosi video, which involved simply slowing down the playback speed to make the House speaker appear impaired.
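The Pelosi clip underlines how low the technical bar can be. As a rough sketch of the general technique (the exact tooling behind that video isn’t public, and the filenames here are hypothetical), a single call from Python to the free ffmpeg utility is enough to slow a clip to 75 percent speed:

```python
# Rough sketch of a playback-speed cheapfake, assuming the free ffmpeg
# tool is installed on the system. Filenames are hypothetical.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "speech.mp4",
    "-vf", "setpts=PTS/0.75",  # stretch video timestamps to 75% speed
    "-af", "atempo=0.75",      # slow the audio track to match
    "slowed_speech.mp4",
], check=True)
```

The asymmetry is the point: producing the distortion takes seconds, while debunking it can take days.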

Deepfakes involve more cutting-edge technology, but cheapfakes have arguably driven more political disinformation. They are cheaper to make and demand less technical expertise, which means they can be created rapidly, in large volumes, and shared in real time while news events and narratives are still developing in public discourse. It is much harder for deepfakes to have this effect (at least for now). Cheapfakes have been effectively deployed to incite genocide against Rohingya Muslims in Myanmar, spread Covid disinformation and even sell car insurance using shoddy audio dubs over videos of President Joe Biden and former White House press secretary Jen Psaki.

Though governments have passed laws and tech platforms have implemented policies targeting deepfakes, the reaction to cheapfakes has been considerably weaker. Complicating matters further, the motivation for creating a cheapfake can run the gamut from disinformation to parody. As such, cheapfakes can occupy a complicated gray area for social media platforms. Facebook, where the Reinhart image originated, has applied an “Altered Photo” label and included links to various fact-checking sources. The photo of Oz, on the other hand, still carries no “misleading media” notice on Twitter.

These two examples serve as templates for understanding when and why cheapfakes might be deployed to obstruct the information ecosystem and influence public opinion. Both played into existing cultural divides, amid highly charged moments, ultimately diverting attention away from fact-based debate and toward emotional outcomes (outrage and comedy).

The Pennsylvania Senate race between Oz and Pennsylvania Lt. Gov. John Fetterman has become part of the national conversation because of Oz’s celebrity and Fetterman’s deft use of social media, perhaps also conditioning audiences to believe that such a prank could be real. 


Judge Reinhart served as a defense attorney representing accomplices of Jeffrey Epstein in 2008. Epstein persists as a prominent figure in myriad conspiracy theories across the political spectrum, which were amplified during Ghislaine Maxwell’s trial and sentencing. Given this built-in appetite, a photo appearing to prove the link to Epstein becomes the perfect vehicle for fomenting anti-government suspicion about the motives behind the Mar-a-Lago search.

It’s a depressing technological development, but we are not without solutions. There are a few tactics people can use to determine whether a piece of content is a cheapfake. The first is a careful audio-visual review of the media in question. Focus on natural details by asking questions like: Does the lighting look right? Is the skin tone consistent? Does this person’s head look too big or too small for their neck or body? Does the voice sound right? Is there a consistent relationship between the subjects and their environment?
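For still photos, that kind of eyeball test can be supplemented with simple automated checks. One common amateur-forensics technique, error level analysis, recompresses a JPEG and compares it to the original; regions that were pasted in or edited often recompress differently and stand out. A minimal sketch in Python with the Pillow imaging library, using hypothetical filenames:

```python
# Minimal sketch of error level analysis (ELA), a simple automated
# complement to visual inspection. Requires Pillow; filenames are hypothetical.
from PIL import Image, ImageChops

original = Image.open("suspect_photo.jpg").convert("RGB")
original.save("resaved.jpg", quality=90)  # recompress the image once
resaved = Image.open("resaved.jpg")

# Edited or pasted-in regions often recompress differently from the rest
# of the image, so the difference between the two versions lights up there.
diff = ImageChops.difference(original, resaved)

# Amplify the usually faint differences so they are visible to the eye.
max_diff = max(hi for _, hi in diff.getextrema()) or 1
ela = diff.point(lambda p: min(255, p * 255 // max_diff))
ela.save("ela_view.png")  # bright, blocky regions merit a closer look
```

Like the visual checklist above, this generates clues, not verdicts; bright regions justify closer scrutiny, not a conclusion.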


One prominent method for evaluating suspect media online, developed by digital literacy expert Mike Caulfield, is known as “SIFT”: Stop, Investigate (the source sharing the content), Find (trusted coverage of the subject matter) and Trace (back to the original piece of media). Digital tools like InVID, a browser plugin for Chrome and Firefox, can also aid fact-checking efforts.

Ultimately, this media only goes viral if we reflexively share it, something that is especially likely in situations that involve strong emotions. With ongoing developments in the Trump classified-documents scandal, a strong movement building in response to the overturning of Roe v. Wade, and mounting pressure on far-right extremist groups that participated in the Jan. 6 insurrection (to name just a handful of lightning rods), it’s doubtful that the Oz and Reinhart instances will be the last viral cheapfakes we see this midterm season. It is critical that we stay vigilant and correct false and misleading media when we encounter it.

And the stakes are higher than just 2022. AI creative tools are improving at a staggering pace. If 2024 does end up being the election where Americans confront effective, mass-scale deepfakes, we’ll be thankful we got a head start in upping our media literacy to deflate their viral potency today.