The videoconferencing app Zoom said Monday it won’t use customers’ data without their consent to train artificial intelligence, addressing privacy concerns of a growing number of customers over new language in the app’s terms of service.
In Section 10.4 of Zoom’s terms of service, updated in March, users agree to “grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license” for various purposes, including “machine learning, artificial intelligence, training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof.”
An article Sunday from Stack Diary, a tech publication, highlighted the updated terms, sparking concerns.
Among the ways Zoom now uses AI are the Zoom IQ Meeting Summary, which provides automated meeting summaries, and services like automated scanning of webinar invitations to detect spam activity, Chief Product Officer Smita Hashim said in a blog post Monday.
The blog post emphasized that meeting administrators can opt out of sharing meeting summary data with Zoom. Non-administrator meeting participants are notified of Zoom’s new data-sharing policies and given the option to accept them or leave the meeting.
“Zoom customers decide whether to enable generative AI features, and separately whether to share customer content with Zoom for product improvement purposes,” a Zoom spokesperson said in a statement. “We’ve updated our terms of service to further confirm that we will not use audio, video, or chat customer content to train our artificial intelligence models without your consent.”
But data privacy advocates and some Zoom users are sounding the alarm, saying the new language needs to be revised. Some users said they would cancel their Zoom accounts, while others demanded that Zoom change its terms or offer everyone, not just meeting administrators, the option to opt out of having their data used for AI training. Use of Zoom’s AI features, which triggers the data collection, is optional.
Despite the company’s statement about the update, users still expressed concern online.
The criticism underscores the growing public scrutiny of AI, specifically concerns over how people’s data and content could be used to train AI large language models without their consent or without their receiving compensation.
Janet Haven, the executive director of Data & Society, a nonprofit research institute, and a member of the National AI Initiative advisory committee, said concerns over the emerging tech go beyond Zoom’s terms of service and represent long-standing concerns over data privacy.
“I think that the fundamental issue is that we don’t have those protections in law as a society in place and in a kind of robust way, which means that people are being asked to react at the individual level. And so that is the real problem with terms of service,” Haven said.
Aric Toler, the director of training and research at Bellingcat, an open-source research publication, said Bellingcat would no longer use Zoom Pro, a subscription that costs $149.90 annually per user, even after Zoom reassured users it wouldn’t use customer data without consent.
“Even if the current constraints of the terms of service keep the AI training to data from only opting in, it’s still worrying enough that it’s better that we divorce from them now rather than later when there are further, worrying developments,” Toler said.
Bellingcat relied on Zoom to host training workshops and webinars for hundreds of journalists, researchers and students, Toler said. He said Bellingcat would look to other video communication platforms, such as Jitsi Meet, Google Meet and Microsoft Teams, and review their data policies.
Toler’s sentiments were echoed across social media, which Haven said reflects “a growing societal understanding of the lack of protections for comprehensive data privacy that we have in law.”
Gabriella Coleman, an anthropology professor at Harvard University and a faculty associate at the Berkman Center for Internet and Society, responded to the Stack Diary article in a post with 1.3 million views on X, the social platform formerly known as Twitter: “Well time to retire @Zoom, who is basically wants to use/abuse you to train their AI.”
In another post on X, writer and director Justine Bateman wrote that she would never “use @Zoom again” until the company changes its updated terms that allow it to use customer content and data to train AI.
Haven said the reaction from Zoom customers isn’t unexpected, given the lack of data protection laws and regulations about AI.
“Regardless of what Zoom’s clarification was, I think what that really raised in the public discourse was the level of discomfort that so many people have in recognizing that our laws don’t protect us against any kind of misuse of our data,” Haven said.
Bogdana Rakova, a senior trustworthy AI fellow at the Mozilla Foundation, a nonprofit group that publishes research projects about AI, said there should be more transparency and public discourse about how AI is being integrated in companies’ products and services.
Rakova said people don’t pay attention to terms of service and aren’t always notified when they are changed. Zoom’s terms of service were changed in March and became effective July 27.
“These are documents that are intentionally written in a way that no sane human will spend their time looking at them,” Rakova said. “It’s not clear when people are notified about changes, and this makes it very complex for consumers and puts the burden on consumers to single-handedly navigate this. It’s extremely challenging.”