Microsoft CEO Satya Nadella said Friday that the company has to "move fast" on combating nonconsensual sexually explicit deepfake images, after AI-generated fake nude pictures of Taylor Swift went viral this week.
In an exclusive interview with NBC News' Lester Holt, Nadella commented on the "alarming and terrible" deepfake images of Swift posted on X that by Thursday had been viewed more than 27 million times. The account that posted them was suspended after it was mass-reported by fans of Swift.
"Yes, we have to act," Nadella said in response to a question about the deepfakes of Swift. "I think we all benefit when the online world is a safe world. And so I don’t think anyone would want an online world that is completely not safe for both content creators and content consumers. So therefore, I think it behooves us to move fast on this."
X didn't respond to an NBC News request for comment about the deepfake images of Swift, while the singer's representative declined to comment on the record.
Microsoft has invested in and created artificial intelligence technology of its own. It is a primary investor in OpenAI, one of the leading AI organizations and the creator of ChatGPT, and it has integrated AI tools into its own products, including Copilot, an AI chatbot on Microsoft's search engine, Bing.
"I go back to what I think's our responsibility, which is all of the guardrails that we need to place around the technology so that there's more safe content that's being produced," Nadella said. "And there's a lot to be done and a lot being done there."
"But it is about global, societal, you know, I’ll say convergence on certain norms," he continued. "Especially when you have law and law enforcement and tech platforms that can come together, I think we can govern a lot more than we give ourselves credit for."
404 Media reported that the deepfake images of Swift that went viral on X were traced back to a Telegram group chat, where members said they used Microsoft's generative-AI tool, Designer, to make such material. NBC News has not independently verified that reporting. Nadella didn't comment directly on 404 Media's report, but in a statement to 404 Media, Microsoft said it was investigating the reports and would take appropriate action to address them.
“Our Code of Conduct prohibits the use of our tools for the creation of adult or non-consensual intimate content, and any repeated attempts to produce content that goes against our policies may result in loss of access to the service," Microsoft said in its statement to 404 Media. "We have large teams working on the development of guardrails and other safety systems in line with our responsible AI principles, including content filtering, operational monitoring and abuse detection to mitigate misuse of the system and help create a safer environment for users.”
After this article was published, Microsoft provided an updated statement saying, "We take these reports very seriously and are committed to providing a safe experience for everyone. We have investigated these reports and have not been able to reproduce the explicit images in these reports. Our content safety filters for explicit content were running and we have found no evidence that they were bypassed so far. Out of an abundance of caution, we have taken steps to strengthen our text filtering prompts and address the misuse of our services."