
Top AI companies agree to work together toward transparency and safety, White House says

Seven companies, including Alphabet, Meta and OpenAI, have agreed to hire independent experts to probe their systems for vulnerabilities and share information with one another, governments and researchers.
Meta headquarters in Menlo Park, Calif., in 2021. (Nick Otto / Bloomberg via Getty Images, file)

Seven leading artificial intelligence companies have agreed to a handful of industry best practices, a first step toward more meaningful regulation, the White House announced Thursday.

The companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — have agreed to principles that include security, transparency with the public and internal testing of their products before releasing them to the public.

In a call with reporters Wednesday evening previewing the announcement, a White House official, who spoke on condition of anonymity under the terms of the call, said that President Joe Biden will eventually sign an executive order regulating AI more strongly, though officials are still working out the details.

“The White House is actively developing executive action to govern the use of AI for the president’s consideration,” the official said. “This is a high priority for the president.”

Top executives from the companies involved will also be meeting with the White House on Friday, including Microsoft President Brad Smith, Google and Alphabet President of Global Affairs Kent Walker, Amazon Web Services CEO Adam Selipsky, Meta President of Global Affairs Nick Clegg, OpenAI President Greg Brockman, Anthropic CEO Dario Amodei, and Inflection AI CEO Mustafa Suleyman.

In remarks to reporters before his meeting with the executives, Biden said the commitments start immediately and will help the industry fulfill its “fundamental obligation to Americans to develop safe, secure and trustworthy technologies that benefit society and uphold our values.”

The seven companies have each agreed to hire independent experts to probe their systems for vulnerabilities and to share information they discover with one another, governments and researchers, a White House official said Wednesday. They also agreed to develop so-called watermarking mechanisms to help users identify when content they see or hear is generated by AI.

The commitments are voluntary and nonbinding.

“The companies have a duty to earn people’s trust and empower users to make informed decisions, labeling content that has been altered or AI-generated, rooting out bias and discrimination, strengthening privacy protections and shielding children from harm,” Biden said Thursday.

Gary Marcus, a prominent critic of the artificial intelligence industry, said the commitments were important but noted that they don’t compel AI companies to disclose what data they’re using to train their models.

“I think it’s a great first step and it doesn’t go far enough,” Marcus said. “First of all, because it’s voluntary. Secondly, one of the most important things we need here is what data are being used to train the models, and that’s not part of this.”

“Red teaming is great. Having the companies share information is terrific. An agreement about watermarking is terrific. These are all good steps. But until we have real transparency around data we’re not done,” he added.

Generative AI systems like OpenAI’s ChatGPT became a sensation late last year, prompting a swarm of new users and leaving the U.S. government scrambling to find an appropriate role. In May, OpenAI CEO Sam Altman courted members of Congress and asked them to regulate the industry. In June, Biden flew to San Francisco to meet with leaders from some of the Silicon Valley companies leading AI in the U.S.

In his Thursday remarks, Biden said that people must be "clear-eyed and vigilant" about how emerging technologies could pose threats to democracy, also referencing social media threats.

"Social media has shown us the harm that powerful technology can do without the right safeguards in place," the president said.

The White House official said that the emphasis of the agreement is on creating transparency that will allow experts to better scrutinize AI systems.

“What the companies are committing to is independent analysis by domain experts, and setting up a broader, multifaceted regime to ensure that that analysis is credible and trustworthy,” the official said. “And they’re also committing, more generally, to engaging with academia and civil society and the U.S. and other governments on establishing best practices for safeguarding these systems and then adhering to those practices.”