
TikTok to warn users about sharing misleading content

Under the new guidelines, if TikTok is unable to verify that content is accurate, it will carry a label noting that it contains unverified information.
Image: The logo of the social network application TikTok on the screen of a phone on May 27, 2020. Martin Bureau / AFP - Getty Images file

TikTok announced new measures Wednesday to curb the spread of misinformation on the social media platform.

The company will begin adding banners to unverified content and notifying users before they share potentially misleading videos.

The new procedures are part of the company's efforts to advance media literacy among TikTok users, said Jamie Favazza, TikTok's director of communications for policy and safety. They go into effect in the U.S. and Canada immediately.

TikTok videos are usually fact-checked when users flag them for misinformation or when they relate to Covid-19, vaccines, elections or other topics where misleading claims commonly spread, Favazza said. If TikTok cannot determine a video's accuracy from readily available information, it works with its fact-checking partners PolitiFact and Lead Stories.

Under the new guidelines, if TikTok is unable to verify whether a video is accurate, the video will carry a banner saying it contains unverified content.

In addition, if users try to share unverified content, they'll get a message with a "caution" icon asking whether they're certain they want to share it. The user will have to choose whether to cancel or to share it anyway.

The policy is a crucial intervention to stop users from sharing unverified content, said Evelyn Gosnell, the managing director at Irrational Labs, a behavioral science lab that tested the new features before they were introduced to all users.

"Your moment to intervene is so brief," Gosnell said. "That's part of why we landed on the word 'caution.' The word 'caution' immediately signals, 'Hey, I should really pay attention here.'"

Once a video is confirmed to contain misleading information, it is taken off the platform, and users are given a way to appeal the decision. If a previously unverified video is found to be accurate, the banner is removed and users can share it without seeing the prompt.

Irrational Labs found that the new sharing prompt led users to share misleading videos 24 percent less often.

The lab also found that "likes" on unverified content decreased by 7 percent when it was labeled and that the interventions were more effective at stopping the spread of misleading information among older users and male users.

Aaron Sharockman, the executive director of PolitiFact, a fact-checking organization affiliated with the Poynter Institute for Media Studies, said giving users the opportunity to see unverified content can also help a platform confirm whether a video is accurate or not.

"I think there can be some really neat things that kind of create a feedback loop, a channel between platform and user to kind of go and talk about what might be or might not be misinformation," he said.

TikTok's announcement also pushes the conversation forward for all social media platforms about what more can be done to prevent the spread of false information online, Sharockman said.

"What can we do to treat the original causes of what creates these problems down the road?" he said.

Gosnell already has ideas for TikTok to test to further limit the spread of misinformation, such as modifying the "like" feature.

"Likes matter, too," she said. "It's a quick 'Should I watch this video or not?' High like count signals to watch it. I'd love to have social media apps think about that. Maybe we remove that signal in the future."