
Google joins AI watermarking coalition as deepfakes hit mainstream tech platforms

Google said it will look to use a project from Adobe called Content Credentials, which adds metadata that indicates AI editing and allows viewers to verify images, videos, audio and documents.
Adobe headquarters in San Jose, Calif., in November. David Paul Morris / Bloomberg via Getty Images file

Google said Thursday it would join a coalition of tech and media companies including Adobe, Intel and Microsoft that is advancing a way to signal when a piece of media has been created or altered by artificial intelligence.

Google said it will look to use a project from Adobe called Content Credentials, which gives creators the option to add a small “CR” symbol to AI-generated or AI-edited media. That symbol links to information about when, where and how the media was edited, functioning as metadata that indicates AI editing and allows viewers to verify images, videos, audio and documents.
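Conceptually, a Content Credential is a signed manifest of provenance data that travels with a file. The Python sketch below is a simplified, hypothetical illustration of what such a manifest might record and how a viewer-side tool could summarize it when someone clicks the “CR” symbol. Real Content Credentials are embedded as a cryptographically signed binary structure, and the field names here only loosely approximate published C2PA assertion labels; this is not a working implementation of the standard.

```python
import json

# Hypothetical, simplified provenance manifest. Real Content Credentials are
# embedded in the media file itself as a signed binary structure; this plain
# JSON merely stands in for the same ideas.
manifest_json = """
{
  "claim_generator": "ExampleEditor/1.0",
  "assertions": [
    {"label": "c2pa.actions",
     "data": {"actions": [{"action": "c2pa.edited",
                           "softwareAgent": "Generative AI fill"}]}},
    {"label": "stds.exif",
     "data": {"exif:DateTimeOriginal": "2024-02-08T10:15:00Z"}}
  ]
}
"""

def summarize(raw: str) -> None:
    """Print the who/when/how story a 'CR' click might surface."""
    manifest = json.loads(raw)
    print(f"Produced with: {manifest['claim_generator']}")
    for assertion in manifest["assertions"]:
        if assertion["label"] == "c2pa.actions":
            # Each recorded action names the edit and the tool that made it.
            for act in assertion["data"]["actions"]:
                tool = act.get("softwareAgent", "unknown tool")
                print(f"Edit recorded: {act['action']} ({tool})")
        elif assertion["label"] == "stds.exif":
            # Capture metadata answers the "when" question.
            print(f"Captured: {assertion['data'].get('exif:DateTimeOriginal')}")

summarize(manifest_json)
```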

The group, the Coalition for Content Provenance and Authenticity, or C2PA, doesn’t support enforcing such labeling on all content. Instead, it offers the credential as a way for institutions like news organizations and social media platforms to share trusted digital media in an age of rapidly evolving AI technology that can create realistic fake media and convincingly alter real media.

“The way we think we’re trying to solve the problem is first, we want to have you have the ability to prove as a creator what’s true,” said Dana Rao, who leads Adobe’s legal, security and policy organization and co-founded the coalition. “And then we want to teach people that if somebody is trying to tell you something that is true, they will have gone through this process and you’ll see the ‘CR,’ almost like a ‘Good Housekeeping’ seal of approval.”

The evolution of AI technology has made it possible to automate editing tasks that were previously time-consuming and technically difficult, putting the ability to create synthetic media in the hands of millions of people. That has opened the door to creative endeavors as well as more nefarious efforts, including disinformation and sexual abuse.

That has sparked efforts to either rein in the technology or make it clearer when something has been created by AI. One such idea, watermarking, aims to add signals, some obvious and others subtle, that would make it easier to discern real from fake.
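In principle, an invisible watermark can be as simple as hiding a recoverable bit pattern inside pixel values. The toy Python sketch below demonstrates that general idea with least-significant-bit embedding; it is only an illustration of a "subtle signal" and does not reflect how production systems such as Google DeepMind's SynthID actually work, which rely on far more robust, learned techniques.

```python
import numpy as np

def embed(pixels: np.ndarray, bits: str) -> np.ndarray:
    """Overwrite the lowest bit of the first len(bits) pixel values."""
    flat = pixels.flatten()
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(b)  # clear the LSB, set watermark bit
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n_bits: int) -> str:
    """Read the watermark back out of the lowest bits."""
    flat = pixels.flatten()
    return "".join(str(flat[i] & 1) for i in range(n_bits))

# A 4x4 grayscale "image" and a 16-bit watermark: the change is invisible
# to the eye (each pixel shifts by at most 1) but fully recoverable.
image = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
mark = "1011001110001111"
stamped = embed(image, mark)
assert extract(stamped, len(mark)) == mark
```

A naive scheme like this is destroyed by compression or cropping, which is one reason production watermarks distribute their signal across the image in ways designed to survive re-encoding.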

Google has created a suite of generative AI consumer products like Bard, an AI chatbot, and AI editing and creation tools. It also owns YouTube, which an NBC News investigation found hosts fake news channels with millions of views that use similar AI tools to pump out false content.

“At Google, a critical part of our responsible approach to AI involves working with others in the industry to help increase transparency around digital content,” said Laurie Richardson, vice president of trust and safety at Google, in a press release about Google joining the C2PA.

“This is why we are excited to join the committee and incorporate the latest version of the C2PA standard. It builds on our work in this space — including Google DeepMind’s SynthID, Search’s About this Image and YouTube’s labels denoting content that is altered or synthetic — to provide important context to people, helping them make more informed decisions.”

The other tech companies that have joined the C2PA, like Microsoft, have similarly created and invested in consumer generative-AI technology. Adobe itself pioneered digital media-editing tools before the popularization of AI, and it now incorporates AI into its products, lowering the barrier to entry for creating sophisticated, manipulated media at scale.

These rapid advancements have not been free from consequences. On Google’s and Microsoft’s search engines, nonconsensual sexually explicit deepfake images of celebrity women can be found in the top search results for their names and the word “deepfakes.” Such material, which includes AI-edited real photos and entirely AI-generated photos, as well as videos that use AI to “swap” faces and clone voices, has flooded mainstream social media platforms and evaded legal and criminal recourse.

Rao said Content Credentials would be useful for verifying newsworthy media in situations like war zones, and pointed to the Israel-Hamas war as an ongoing example of a conflict where doubt has emerged over whether images online are real or manipulated. Some news media organizations have become C2PA members, including The New York Times and the BBC, and some camera companies like Canon are building Content Credentials into their technology.

“The media literacy piece of this is critical,” Rao said. “The cultural change we need is to verify, then trust.”