
Violent but vague: Videos like those linked to Highland Park shooting suspect lost in firehose of YouTube content

After the discovery of the disturbing social media posts, platforms faced renewed questions over whether content moderation can spot red flags for mass violence.
An American flag flies at half-staff near a memorial Wednesday for the victims of a mass shooting at a Fourth of July parade in Highland Park, Ill. (Jim Vondruska / Getty Images)

The videos posted on YouTube accounts associated with the suspect in the Highland Park, Illinois, shooting included the kinds of images and themes that pose a particular challenge to technology companies’ moderation efforts: violent but vague. 

The suspect, Robert E. Crimo III, was also a rapper under the name “Awake,” posting videos of himself and his music. Some videos depicted extreme and ultraviolent imagery, including scenes of shootings. Ten months ago, a separate account that featured numerous videos of Crimo posted a video that appeared to show the parade route in the majority-Jewish Chicago suburb where the attack occurred. Other videos on the channel included narration in which someone who appeared to be Crimo warned viewers about what he described as his unstoppable destiny.

They’re the kinds of videos that can be tough for automated moderation technology or even human moderators to catch.

“The social media company is not the one in the position to find that needle in the haystack,” said Emma Llansó, the director of the Free Expression Project at the Center for Democracy and Technology. 

Tech companies have come under intense scrutiny in recent years for a hands-off approach to moderation that left very few rules about what couldn’t be posted. Extremism researchers and academics pushed the companies to take action, most pointedly calling for them to stop boosting false information through recommendation systems and to limit how those systems connected extremist groups and amplified extremist content.

YouTube, like many other tech companies, uses a combination of human and automated moderation. It has also added new rules to ban groups such as QAnon and removed prominent white nationalist accounts.

But accounts associated with Crimo don’t necessarily fall into any of those groups. And while YouTube does have a policy against making direct threats of violence, videos can often fall into what Brian Fishman, the former policy director overseeing the implementation of Facebook’s dangerous organizations policy, calls “gray area” content, in which people discuss their motivations and frustrations without violating rules.

“It’s harder to write broad rules that can be enforced at scale across large platforms to do that,” Fishman said. 

A YouTube spokesperson said channels and videos that violated its community guidelines and creator responsibility policy were removed after the Highland Park shooting. 

In hindsight, the videos should have raised red flags, said Dr. Ziv Cohen, a forensic and clinical psychiatrist. 

Cohen, who provides evaluations for law enforcement agencies and in court cases, said there is merit to the idea that YouTube videos and other social media content can be used to predict potential shooters.

“What helps us pick out future shooters is if we know somebody is on the pathway to violence,” Cohen said, adding that the online profile associated with Crimo was “concerning” and indicative of a potential for violence. 

In one video, a person who appears to be Crimo seems to depict the aftermath of a school shooting; the video ends with him draped in a U.S. flag. The person also depicted himself holding a gun and included narration suggesting he might have felt destined to carry out an attack.

“If someone is showing a lot of content related to school shootings or other mass shootings, I think that is absolutely a red flag,” Cohen said. 

Mass shooting suspects like Crimo and the gunman in Uvalde, Texas, who killed 21 people, appear to have left trails of violent posts and interactions on social media platforms. The companies have faced intense scrutiny over why they didn’t notice the behavior before it turned deadly.

But the task of detecting and moderating the content isn’t simple.

On YouTube and most other tech platforms, there’s a nonstop stream of content to review, including videos with direct threats or harassment. Llansó said the scale of content uploaded to platforms like YouTube makes it impossible for humans to review every video before it gets posted online. Instead, YouTube relies on automated content moderation tools that Llansó said are “notoriously imprecise.”

Using such tools to search for content that doesn’t violate rules but could predict violence or terrorism would be difficult, and it could aggravate certain biases in policing, Llansó said.

Even if social media content could in theory help tech companies or other authorities predict shootings, building technology for the task would be incredibly difficult and fraught with ethical issues, Llansó said.

“There are a lot of different ways that machine learning tools for content moderation can build in and build upon existing societal biases,” Llansó said. 

The content attributed to Crimo fell into a gray area not covered by YouTube’s guidelines, which ban videos recorded by a perpetrator during a “deadly or major violent event” and incitements to viewers to commit acts of violence, Fishman said.

Even though some content from shooting suspects occupies that middle space, Fishman said, potentially dangerous material can still have identifiable characteristics.

“Oftentimes those sort of ‘artistic’ depictions do glorify previous attacks,” Fishman said.

Fishman, who is a senior fellow with the International Security Program at New America, a Washington, D.C., think tank, said researchers are increasingly working to differentiate between people making “direct threats” and those engaged in subcultures that are “pretty disgusting but don’t pose real-world threats.”

He also said that platforms might not alert authorities when creators violate rules about violent content and that content moderation usually doesn’t generate referrals to law enforcement. 

Despite the challenges social media companies face in content moderation, Fishman said, they have a responsibility to look for solutions. 

“I think that’s what they signed up for when they became large, ubiquitous platforms,” he said.