
Facebook and Instagram will label more AI-made images ahead of November election

Meta, which owns the two apps, said it can detect many AI images automatically but that finding and labeling AI-made audio and video will be harder.
People walk past a Meta Platforms sign outside the company's headquarters in Menlo Park, Calif., in 2021. Nick Otto / Bloomberg via Getty Images file

The parent company of Facebook and Instagram said Tuesday it would ramp up its use of labels on artificial intelligence-generated images ahead of the November election but warned it doesn’t yet have the ability to easily detect audio and video made with AI. 

Meta said in a blog post that people using its apps want transparency around the quickly improving technology known as generative AI and that the company’s answer for now is to apply a label, “Imagined with AI,” whenever possible. 

“It’s important that we help people know when photorealistic content they’re seeing has been created using AI,” Nick Clegg, Meta’s president for global affairs, wrote in the blog post. 

Clegg wrote that in the coming months, Meta would start applying the labels to images on Facebook, Instagram and Threads. He said the labels would appear in all languages supported by each app. 

The timing coincides with the U.S. elections this year, including November’s presidential race, as well as elections in more than 50 other countries, such as India and Mexico. 

“We’re taking this approach through the next year, during which a number of important elections are taking place around the world,” wrote Clegg, a former British deputy prime minister. “During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve.”

Experts have warned that disinformation — including audio, video and images made with AI — poses an unprecedented threat in 2024 and that Americans are ill-prepared for what’s coming. 

In one example last month, an AI-generated robocall imitating President Joe Biden told New Hampshire residents not to vote, experts said. 

Clegg said Meta is optimistic about quickly detecting AI-generated images, including those made with software from other companies. He said new industrywide technical standards will make it possible for Meta to detect images made with AI software from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock. 

The same isn’t true with video and audio, he wrote. 

“While companies are starting to include signals in their image generators, they haven’t started including them in AI tools that generate audio and video at the same scale, so we can’t yet detect those signals and label this content from other companies,” he wrote. 
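For illustration, the sketch below shows roughly what such an embedded "signal" can look like in practice. It assumes metadata conventions along the lines of the IPTC digital source type and C2PA Content Credentials that image generators have begun adopting; the article does not name the specific standards, this is not Meta's actual detection pipeline, and the file name and naive byte search are purely hypothetical.

```python
# A rough sketch (not Meta's real system) of checking an image file for
# embedded AI-provenance markers that some generators write into metadata.
from pathlib import Path

# Assumed markers: the IPTC "trainedAlgorithmicMedia" digital source type
# and the C2PA (Content Credentials) manifest label.
AI_MARKERS = [
    b"trainedAlgorithmicMedia",
    b"c2pa",
]

def looks_ai_labeled(image_path: str) -> bool:
    """Return True if the file's raw bytes contain a known AI-provenance marker."""
    data = Path(image_path).read_bytes()
    return any(marker in data for marker in AI_MARKERS)

if __name__ == "__main__":
    # "example.jpg" is a hypothetical file name used only for this sketch.
    print(looks_ai_labeled("example.jpg"))
```

A real detector would parse the XMP/C2PA metadata properly and verify cryptographic signatures rather than searching bytes, which is part of why, as Clegg notes, audio and video without such signals remain much harder to flag.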

In the meantime, he said, Meta would ask people to disclose when they share AI-made video or audio so Meta can add a label. 

“We may apply penalties if they fail to do so,” he warned.