Videos apparently by subway shooting suspect back on YouTube after platform took them down

It’s unclear how YouTube might crack down on potentially harmful content that is uploaded again following a tragedy such as the Brooklyn, New York, subway shooting.


Videos apparently made by the man suspected of shooting into a New York City subway train continue to be uploaded to YouTube, despite the platform taking down the original channel, which had hosted videos expressing bigoted views, violent rhetoric and criticism.

Frank R. James, 62, was arrested Wednesday and is charged with committing a terrorist attack or other violence against a mass transportation system. Police shared screenshots from the YouTube channel prophet oftruth88, a space where James appeared to go on rants using profanity and racial slurs while also espousing violent thoughts.

The channel was taken down by Wednesday afternoon, but clips appear to have been re-uploaded by multiple users since the account was discovered, allowing the content to continue spreading. The resurfaced clips highlight some of the challenges platforms face in trying to moderate content and keep videos like these out of public view after they have already appeared online. Experts say video is more difficult to review than text posts on social media.

One clip, viewed more than 17,000 times, includes violent language and racial slurs against Spanish speakers. That video was taken down after NBC News shared it with Google in an email Thursday.

Another video, which was viewed more than 11,000 times by Thursday afternoon, included a rant disparaging white women and expressing violent ideation about them.

Both appear to clearly violate YouTube's community guidelines on hate speech and could violate guidelines on harassment.

Google, which owns YouTube, did not immediately respond to a request for comment on whether it was using automated detection software to prevent re-uploads and take down such videos.

YouTube said in 2020 that it was using a combination of machine detection and human review to swiftly remove and moderate content, including content that spread Covid-19 misinformation. But it's unclear how the platform uses that system to crack down on harmful content that is re-uploaded following a tragedy such as the subway shooting.

Platforms like YouTube have previously used a technique called hash matching to keep certain videos off their services, according to Emma Llansó, director of the Center for Democracy and Technology’s Free Expression Project. Video taken during the 2019 mass shooting in Christchurch, New Zealand, was hash matched in an attempt to prevent anyone from uploading the footage to various tech platforms, though altered recordings often slipped through the cracks.
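In its simplest form, hash matching computes a digital fingerprint of a known prohibited file and rejects any upload whose fingerprint already appears on a blocklist. The sketch below illustrates that idea with a cryptographic hash; the blocklist entry and file paths are hypothetical, and real systems, such as the shared industry hash database used after Christchurch, are considerably more elaborate.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests for videos already flagged for removal.
BLOCKED_DIGESTS = {
    "9f2b...",  # placeholder digest of a known prohibited video
}

def sha256_of_file(path: str) -> str:
    """Stream a file in 1 MB chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_blocked(path: str) -> bool:
    """Reject an upload whose digest exactly matches a known prohibited video."""
    return sha256_of_file(path) in BLOCKED_DIGESTS
```

Because a cryptographic digest changes completely if even a single byte of the file changes, any re-encoding, trim or watermark defeats an exact match, which is one reason altered recordings slip through.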

But while the Christchurch video clearly displayed a horrific act of violence, videos like those thought to be from James pose a more complicated question for companies, Llansó said.

“They basically have to decide: Has what this person has done been so ... beyond the pale that nothing they have ever said on our platform should be accessible anymore?” Llansó said. “And I think you’ll see a lot of really, you know, differences of opinions among, whether it’s news reporters, or people who study this kind of violent action and want to be able to review what was this person saying a week ago, six months ago, a year ago.”

YouTube confirmed in a statement to Insider that the account was removed under its creator responsibility guidelines, which can take into account behavior that occurs off the platform.

While it's difficult to know what is happening internally at YouTube in the aftermath of the shooting, Llansó said it's possible that removing the account had little to do with the actual content posted.

"So when they make the decision because of the identity of the account holder — 'this person just committed a heinous act; we are taking down this account' — there's still a lot of potential sort of review of content and application of these rules to each individual video that they might be going through," Llansó said. "And again, I don't know their process."

Moderating video poses a much tougher challenge than moderating text posts on social media, which puts YouTube in a more difficult position than companies like Facebook or Twitter, said Michael Karanicolas, executive director of the Institute for Technology, Law and Policy at UCLA.

"Videos are easier to manipulate and change slightly in a way that evades content bans," Karanicolas said. "Videos are more difficult to review, and filtering technologies, such as they exist, don't work as well on videos. And when they are applied to videos, there's a much more significant risk of collateral damage."

Detection systems for copyrighted content taken down from YouTube, such as music or films, are sometimes offered as a comparison, but those systems also produce significant numbers of false positives, according to Karanicolas. And legal liability forces platforms like YouTube to act more aggressively against content that violates copyright law.

While hate speech, harassing speech and violent speech are against the platform's community guidelines, they don't carry the same government oversight as copyright violations because such speech is constitutionally protected.

There's room for conversation about the harm that comes from allowing such speech on a platform as large as YouTube, which has every right to create its own rules, Karanicolas said. It's harder to know what impact the videos apparently from James had on others as they remained on the platform following Tuesday's shooting.

"There's a famous saying that the internet is forever and that even if YouTube takes material down, it is inevitably going to pop up on other video-sharing platforms, which are not as vigilant in their moderation or which don't apply the same standards that YouTube does," Karanicolas said. "So, yes, these things will always find a home. The flip side to that is that amplification is also a relevant metric."