Meta to require political advertisers to disclose when they use AI

Starting next year, advertisers on Facebook and Instagram will have to say during the buying process whether they are using media that was digitally created or altered.
A pedestrian walks past the Meta logo in front of the company's headquarters in Menlo Park, Calif., in 2021. Justin Sullivan / Getty Images file

SAN FRANCISCO — Meta said Wednesday it would begin forcing political advertisers to disclose when they use altered or digitally created media, such as a deepfake video of a candidate, as the tech industry braces for a wave of video, images and audio made with artificial intelligence ahead of the 2024 election. 

Meta, which owns Facebook and Instagram, said in a blog post that it would require advertisers to disclose during the ad-buying process “whenever a social issue, electoral, or political ad contains a photorealistic image or video, or realistic sounding audio, that was digitally created or altered.” 

Nick Clegg, Meta’s president for global affairs, said in a statement that the policy would go into effect worldwide early next year — just in time for the 2024 presidential primaries and caucuses. 

The social media company said in the blog post, “If we determine that an advertiser doesn’t disclose as required, we will reject the ad and repeated failure to disclose may result in penalties against the advertiser.” Meta also said it would put a label on such ads. 

The policy stops short of banning altered media altogether — in effect, conceding that AI-generated media is here to stay. In April, the Republican National Committee used AI to create a 30-second ad imagining a second term for President Joe Biden. In March, critics of former President Donald Trump circulated fake AI-generated images of Trump being arrested. 

But if advertisers use synthesized media, they will need to disclose it to ensure people are not misled, Meta said. 

The new policy echoes a similar move Google announced in September requiring advertisers to disclose “synthetic” media. Google and Meta are the two largest internet ad companies by total sales, so their decisions can become de facto standards online. 

Meta has been embroiled in fights over altered videos for years. In 2019, Facebook refused to take down doctored videos of then-House Speaker Nancy Pelosi, D-Calif., prompting Pelosi to accuse the California company of lying to the public. The platform changed its policies the next year to ban or label certain posts with manipulated media. 

But advances in generative AI in the past year have led to more realistic fakes created with far less effort than a few years ago, posing a challenge to online platforms, candidates and voters. 

Meta still bans manipulated media in some cases laid out in its rulebook for users.

It said the new advertising policy will apply to situations such as depicting “a real person as saying or doing something they did not say or do” — a form of advanced video or audio editing known as deepfake technology, used recently to impersonate well-known figures such as Tom Hanks.

The policy will also apply to ads that “depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened,” and to ads that “depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.” 

The disclosure requirement will not apply if the digital editing is “inconsequential or immaterial” to the issues raised in an ad, Meta said.