'Hate is in the ether': Research finds hate is resilient on the internet

Even as social networks have vowed to do more to remove hate speech from their platforms, at least some of the people who spread it are still finding ways to organize online.

The online networks formed by hate groups may be more resilient than previously thought, and social media companies may need to adopt different tactics to keep those groups off their platforms, according to research released Wednesday.

Even as social networks such as Facebook have vowed to do more to remove hate speech from their platforms, at least some of the people who spread it are still finding ways to organize online, whether by moving to competing sites or by otherwise evading detection, according to the findings published in the journal Nature.

Researchers at George Washington University and the University of Miami said they found that hate groups, such as supporters of the Ku Klux Klan, are highly adaptable and organized from the bottom up, making them difficult for any one tech company to stop.

“Instead of love being in the air, we found hate is in the ether,” Neil Johnson, the lead author of the research, said in a statement.

The findings come as online extremism has become the subject of international concern after a series of mass shootings that targeted various ethnic and religious groups.

The researchers developed a mapping model that allowed them to track online hate clusters and study how people have adapted to bans since 2017. Most of the biggest tech companies, including Facebook and YouTube, have implemented rules against hate speech and have begun to crack down on people and groups devoted to extremism.

Facebook said it removed 4 million pieces of hate speech in the first three months of this year, an increase from prior quarters.

But at least some people whose posts are removed may reorganize on other platforms where they can avoid censorship, the researchers found.

“We observe the current hate network rapidly rewiring and self-repairing at the micro level when attacked, in a way that mimics the formation of covalent bonds in chemistry,” the researchers say.

For example, the researchers found clusters of Ku Klux Klan supporters writing in Ukrainian on VKontakte, a Russia-based social network that’s similar to Facebook. After the Ukrainian government banned VKontakte, KKK backers reconstituted as a cluster on Facebook but with “KuKluxKlan” written in Cyrillic, “making it harder to catch with English-language detection algorithms.”
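To make that kind of evasion concrete, here is a minimal Python sketch, not any platform's actual moderation code, of how a keyword filter that only knows the Latin-script spelling of a banned name misses a Cyrillic rendering of it, and how a simple transliteration step closes the gap. The banned-term list, the character mapping and the Cyrillic spelling shown are illustrative assumptions, not data from the study.

```python
# Illustrative sketch only: an English-only keyword check vs. a script-aware one.

BANNED_TERMS = {"kukluxklan"}

# Hypothetical Cyrillic-to-Latin mapping covering only the letters used in this example.
CYRILLIC_TO_LATIN = {"к": "k", "у": "u", "л": "l", "с": "s", "а": "a", "н": "n"}

def normalize(text: str) -> str:
    """Lowercase, transliterate Cyrillic letters, and fold 'x' to 'ks' so that
    Latin and Cyrillic spellings of the same name compare equal."""
    latin = "".join(CYRILLIC_TO_LATIN.get(ch, ch) for ch in text.lower())
    return latin.replace("x", "ks")

def english_only_filter(name: str) -> bool:
    """Flags a name only if it literally contains a banned Latin-script term."""
    return any(term in name.lower() for term in BANNED_TERMS)

def script_aware_filter(name: str) -> bool:
    """Normalizes both the name and the banned terms before comparing."""
    return any(normalize(term) in normalize(name) for term in BANNED_TERMS)

cyrillic_name = "КуКлуксКлан"  # one plausible Cyrillic spelling, not taken from the study
print(english_only_filter(cyrillic_name))  # False: slips past the English-only check
print(script_aware_filter(cyrillic_name))  # True: caught once scripts are normalized
```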

Facebook did not immediately respond to a request for comment on the research.

Johnson, who trained as a physicist before shifting his research to a wide array of complex systems, has previously used similar methods to study the online activity of supporters of the Islamic State militant group.

The latest research claims to be the first of its kind to map the spread of online right-wing hate, and it says hate speech “clusters” are interconnected across services including Instagram, Snapchat and WhatsApp, as well as across countries and languages.

Some hate speech has moved to relatively little-known services such as 8chan, leading to debates about whether such services belong on the internet and whether they have helped to inspire shootings or other forms of domestic terrorism in the United States.

The researchers suggested additional ways for social media networks to counter hate groups, including randomly banning a small fraction of individual users in order to sever their connections.
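As a rough way to picture that suggestion, the sketch below, which is not the researchers' published model, builds a synthetic network of “accounts” with the networkx library, removes a random fraction of them and reports how the largest connected cluster shrinks. The graph type, sizes and banning fractions are illustrative assumptions.

```python
# Toy illustration of banning a random fraction of accounts and measuring fragmentation.
import random
import networkx as nx

random.seed(42)

# Assumed stand-in for real data: a scale-free graph of 1,000 "accounts".
graph = nx.barabasi_albert_graph(n=1000, m=3, seed=42)

def largest_component_size(g: nx.Graph) -> int:
    """Size of the biggest connected cluster, a rough proxy for the network's reach."""
    return max(len(c) for c in nx.connected_components(g)) if g.number_of_nodes() else 0

def ban_random_fraction(g: nx.Graph, fraction: float) -> nx.Graph:
    """Return a copy of the graph with a random `fraction` of nodes (accounts) removed."""
    banned = random.sample(list(g.nodes), int(fraction * g.number_of_nodes()))
    pruned = g.copy()
    pruned.remove_nodes_from(banned)
    return pruned

print("before:", largest_component_size(graph))
for fraction in (0.05, 0.10, 0.20):
    pruned = ban_random_fraction(graph, fraction)
    print(f"after banning {fraction:.0%}:", largest_component_size(pruned))
```

Running the script prints the size of the largest cluster before and after each level of banning; how much the network actually fragments depends on its structure, which is one reason the researchers also proposed other tactics, described below.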

They also suggested setting up “intermediary clusters” to engage hate groups and draw out differences among them. For example, some white supremacists oppose the idea of a central European government, while others support a unified Europe under a Nazi-like regime, the researchers said.