EXCLUSIVE

On Musk's Twitter, users looking to sell and trade child sex abuse material are still easily found

A review conducted by NBC News found dozens of accounts and hundreds of tweets claiming to sell child sexual abuse material.
Elon Musk said cracking down on child sexual exploitation material was a top priority. NBC News found accounts openly promoting the sale and trade of such material. Chelsea Stahl / NBC News; Getty Images

Twitter accounts that offer to trade or sell child sexual abuse material under thinly veiled terms and hashtags have remained online for months, even after CEO Elon Musk said he would combat child exploitation on the platform.

“Priority #1,” Musk called it in a Nov. 20 tweet. He has also criticized Twitter’s former leadership, claiming it did little to address child sexual exploitation, and said he intended to change that.

But since that declaration, dozens of accounts have continued to post what adds up to hundreds of tweets using terms, abbreviations and hashtags indicating the sale of what Twitter calls child sexual exploitation material, according to a count of tweets from just a single day. The signs and signals are well known among experts and law enforcement agencies that work to stop the spread of such material.

The tweets reviewed by NBC News offer to sell or trade content that is commonly known as child pornography or child sexual abuse material (CSAM). The tweets do not show CSAM, and NBC News did not view any CSAM in the course of reporting this article.

Some tweets and accounts have been up for months and predate Musk’s takeover. They remained live on the platform as of Friday morning. 

Many more tweets reviewed by NBC News over a period of weeks were published during Musk’s tenure. Some users tweeting CSAM offers appeared to delete the tweets shortly after posting them, seemingly to avoid detection, and later posted similar offers from the same accounts. Some accounts offering CSAM said that their older accounts had been shut down by Twitter, but that they were able to create new ones.

According to Twitter’s rules published in October 2020, “Twitter has zero tolerance towards any material that features or promotes child sexual exploitation, one of the most serious violations of the Twitter Rules. This may include media, text, illustrated, or computer-generated images.”

In an email to NBC News after this article was published, Ella Irwin, Twitter’s vice president of product overseeing trust and safety, said, “We definitely know we still have work to do in the space, and certainly believe we have been improving rapidly and detecting far more than Twitter has detected in a long time but we are deploying a number of things to continue to improve.” Irwin asked that NBC News provide the findings of its investigation to the company so that it could “follow up and get the content down.”

It’s unclear just how many people remain at Twitter to address CSAM after Musk enacted several rounds of layoffs and issued an ultimatum that led to a wave of resignations. Musk has engaged some outside help, and the company said in December that its suspensions of accounts for child sexual exploitation had risen sharply. But a representative for the National Center for Missing & Exploited Children, the U.S. child exploitation watchdog, said that the number of reports of CSAM detected and flagged by the company has remained unchanged since Musk’s takeover.

Twitter also disbanded its Trust and Safety Council, which included nonprofits focused on addressing CSAM.

Twitter’s annual report to the Securities and Exchange Commission said the company employed more than 7,500 people at the end of 2021. According to internal records obtained by NBC News, Twitter’s overall headcount had dwindled to around 1,340 active employees as of early January, with around 20 people working in the company’s Trust & Safety organization. That is less than half of the previous Trust and Safety workforce.

One former employee who worked on child safety issues, a specialization that fell under a larger Trust and Safety group, said that many product managers and engineers who were on the team that enforced anti-CSAM rules and related violations before Musk’s purchase had left the company. The employee asked to remain anonymous because they had signed a nondisclosure agreement. It’s not known precisely how many people Musk has assigned to those tasks now.

Since Musk took over the platform, Twitter has cut the number of engineers at the company in half, according to internal records and people familiar with the situation.

Irwin said in her email that “many employees who were on the child safety team last year are no longer part of the company but that primarily happened between January and August of last year due to rapid attrition Twitter was experiencing across the company.” Additionally, she said that the company has “roughly 25% more staffing on this issue/problem space now than the company had at its peak last January.”

CSAM has been a perpetual problem for social media platforms. And while some technology has been developed to automate the detection and removal of CSAM and related content, the problem remains one that needs human intervention as it develops and changes, according to Victoria Baines, an expert on child exploitation crimes who has worked with the U.K.’s National Crime Agency, Europol, the European Cybercrime Centre and Facebook. 

“If you lay off most of the trust and safety staff, the humans that understand this stuff, and you trust entirely to algorithms and automated detection and reporting means, you’re only going to be scratching the surface of the CSAM phenomenon on Twitter,” Baines said. “We really, really need those humans to pick up the signals of what doesn’t look and sound quite right.” 

The accounts seen by NBC News promoting the sale of CSAM follow a known pattern. NBC News found tweets posted as far back as October promoting the trade of CSAM that are still live — seemingly not detected by Twitter — and hashtags that have become rallying points for users to provide information on how to connect on other internet platforms to trade, buy and sell the exploitative material. 

In the tweets seen by NBC News, users claiming to sell CSAM were able to avoid moderation with thinly veiled terms, hashtags and codes that can easily be deciphered. 

Some of the tweets are brazen, their intent clearly identifiable (NBC News is not publishing details about those tweets and hashtags so as not to further amplify their reach). While the common abbreviation “CP,” a widely used shortening of “child porn,” is unsearchable on Twitter, one user who had posted 20 tweets promoting their material used another searchable hashtag and wrote “Selling all CP collection” in a tweet published Dec. 28. The tweet remained up for a week until the account appeared to be suspended following NBC News’ outreach to Twitter. A search Friday found similar tweets still on the platform. Others used keywords associated with children, replacing certain letters with punctuation marks like asterisks, and instructed users to direct message their accounts. Some accounts even included prices in their bios and tweets.

None of the accounts reviewed by NBC News posted explicit or nude photos or videos of abuse to Twitter, but some posted clothed or semi-clothed images of young people alongside messages offering to sell “leaked” or “baited” images. 

Many of the accounts using Twitter to promote harmful content cited the use of virtual storage accounts on MEGA, an encrypted file sharing site based in New Zealand. The accounts posted videos of themselves scrolling through MEGA, showing folder names suggesting child abuse and incest.  

In a statement, MEGA Executive Chairman Stephen Hall said that the company has a “zero tolerance” policy toward CSAM on the service. “If a public link is reported as containing CSAM, we immediately disable the link, permanently close the user’s account, and provide full details to the New Zealand authorities, and any relevant international authority,” Hall said. “We encourage other platforms to provide us with any signals they become aware of so we can take action on Mega. Similarly, we provide others with information that we receive.”

The issue of CSAM in relation to MEGA and Twitter has triggered at least one prosecution in the U.S.

A June 2022 Department of Justice press release announcing the sentencing of an individual convicted of “transporting and possessing thousands of images depicting child sexual abuse” described how Twitter was used by the individual. 

“In late 2019, as part of an ongoing investigation, officers identified a Twitter user who sent two MEGA links to child pornography,” the press release said. The release said the individual “admitted to viewing child pornography online and provided investigators with his MEGA account information. The account was later found to contain thousands of files containing child pornography.”

Nearly all of the tweets viewed by NBC News that advertised or promoted CSAM used hashtags that referred to MEGA or another similar service, allowing users to search and locate their tweets. Despite the hashtags being active for months, they remain searchable on the platform. 

The problem has been pervasive enough to catch the attention of some Twitter users. In 25 tweets, users tagged Musk using at least one of the major hashtags to alert him to the content. The earliest tweet flagging the hashtag to Musk via his user name said, “@elonmusk I doubt you’ll see this, but it’s come to my attention that [this] hashtag has quite a few accounts asking for / selling cp. I was going to report them all but there’s too many, even more in replies. Just a heads up.”

Historically, Twitter has taken action against some similar hashtags, such as one hashtag related to cloud storage service Dropbox that appears to now be restricted in Twitter search. In a statement, a Dropbox representative said, “Child sexual exploitation and abuse has no place on Dropbox and violates our Terms of Service and Acceptable Use Policy. Dropbox uses a variety of tools, including industry-standard automated detection technology, and human review, to find potentially violating content and action it as appropriate.”

Automated systems used by many social media platforms were originally created to detect known abuse images and prevent their continued distribution online.

Facebook has used PhotoDNA, a hash-matching technology developed by Microsoft, alongside human content moderators for a decade to detect and prevent the distribution of CSAM.
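
PhotoDNA’s exact algorithm is proprietary, but the general approach it represents, comparing a perceptual fingerprint of an uploaded image against a database of fingerprints of previously identified imagery, can be sketched with a much simpler stand-in. The example below uses a basic “difference hash” computed with the Pillow library and a hypothetical blocklist of known hashes; it illustrates hash-based matching in general, not PhotoDNA itself.

```python
# Illustration only: a simple perceptual "difference hash" (dHash), NOT PhotoDNA.
# Assumes the Pillow imaging library; the blocklist of known hashes is hypothetical.
from PIL import Image

def dhash(path: str, hash_size: int = 8) -> int:
    """Shrink the image to grayscale and encode brightness gradients as bits."""
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical database of hashes of previously identified images.
KNOWN_HASHES = {0x3C3C66C3C3663C3C}

def matches_known_image(path: str, max_distance: int = 5) -> bool:
    """Flag an upload if its hash is close to any hash on the blocklist."""
    h = dhash(path)
    return any(hamming(h, known) <= max_distance for known in KNOWN_HASHES)
```

Because the fingerprint is perceptual rather than cryptographic, a lightly cropped or recompressed copy of a known image still lands within a small Hamming distance of the stored hash and can be caught automatically.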

Automated technologies have been developed at various companies to scan and detect text that could be associated with CSAM. WhatsApp, a Meta-owned company, says it uses machine learning to scan text in new profiles and groups for such language.
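
The companies do not disclose the details of those text-scanning models. As a rough, generic illustration of the approach, the sketch below trains a small scikit-learn text classifier on placeholder examples; the training data, labels and scoring threshold are hypothetical and stand in for text that human reviewers have previously flagged.

```python
# Generic text-classification sketch, not WhatsApp's or Twitter's actual system.
# Assumes scikit-learn; the training examples and labels are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: profile/group text paired with a label
# (1 = previously flagged by human reviewers, 0 = benign).
texts = [
    "selling rare collectible coins, DM for prices",   # benign
    "group for hiking photos and trail maps",          # benign
    "example of text reviewers previously flagged",    # flagged (placeholder)
    "another previously flagged example phrase",       # flagged (placeholder)
]
labels = [0, 0, 1, 1]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

def risk_score(new_text: str) -> float:
    """Probability that the text resembles material human reviewers flagged before."""
    return float(model.predict_proba([new_text])[0][1])

# In practice, high-scoring text would be routed to human moderators, not auto-actioned.
```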

The former Twitter employee said that the company had been working to improve automated technology to block problematic hashtags. But they emphasized that human input would still be needed to flag new hashtags and to enforce the rules.

“Once you know the hashtags you’re looking for, detecting hashtags for moderation is an automated process. Identifying the hashtags that are possibly against the policies requires human input,” they told NBC News. “Machines aren’t generally taught today to automatically infer whether a hashtag that hasn’t been seen before is possibly connected or being used by people looking for or sharing CSAM — it’s possible, but it’s usually quicker to use an expert’s input to add a hashtag that is being misused into detection tools than wait for a model to learn it.”
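
A minimal sketch of the split the former employee describes, automated matching of hashtags that experts have already identified plus a queue of unfamiliar co-occurring hashtags for human review, might look like the following. The blocklist entries and normalization rules here are hypothetical.

```python
# Minimal sketch of hashtag moderation as described above: automated matching of
# known-bad hashtags, with human review for unfamiliar tags that co-occur with them.
# The blocklist contents and normalization scheme are hypothetical.
import re
import unicodedata

BLOCKED_HASHTAGS = {"examplebadtag1", "examplebadtag2"}  # maintained by human experts

def extract_hashtags(tweet_text: str) -> list[str]:
    """Pull raw hashtag strings out of a tweet."""
    return re.findall(r"#(\w+)", tweet_text)

def normalize(tag: str) -> str:
    """Fold case, strip accents, and drop separators so trivial variants still match."""
    tag = unicodedata.normalize("NFKD", tag).encode("ascii", "ignore").decode()
    return re.sub(r"[_\W]+", "", tag.lower())

def triage(tweet_text: str) -> tuple[bool, set[str]]:
    """Return (should_block, unfamiliar_tags_to_send_to_human_review)."""
    tags = {normalize(t) for t in extract_hashtags(tweet_text)}
    hits = tags & BLOCKED_HASHTAGS
    # Unseen hashtags that travel alongside blocked ones are surfaced for experts to
    # evaluate; the machine alone cannot reliably infer what a brand-new tag means.
    needs_review = tags - BLOCKED_HASHTAGS if hits else set()
    return bool(hits), needs_review
```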

In her email to NBC News, Irwin confirmed that “hashtag blocking was deployed weeks ago” and noted that some human moderation was required. “Over time, once we feel the precision is sufficient it will be automated,” she added.

Equally important, Baines and the former employee said, is that text-based detection could overcorrect or pose potential free speech issues. MEGA, for instance, is used for many types of content besides CSAM, so the issue of how to moderate hashtags referring to the service isn’t straightforward.

“You need humans, is the short answer,” Baines said. “And I don’t know if there’s anyone left doing this stuff.”