Instagram ‘systemically fails’ to protect high-profile women from abuse, study finds

The study analyzed thousands of direct messages of five high-profile women on the platform.
Instagram “systemically fails” to protect high-profile women from the “epidemic” of misogynistic abuse through direct messages, a study published Wednesday suggests.

The Center for Countering Digital Hate, or CCDH, a nonprofit organization that combats online harassment and misinformation, found that Instagram, which is owned by Meta, the parent company of Facebook, has been "negligent" in its response to misogynistic harassment.

The CCDH, which analyzed more than 8,700 messages from five high-profile women on the app, said the research shows "Instagram systematically fails to enforce appropriate sanctions and remove those who break its rules."

"Digital spaces provide increasingly important ways to maintain relationships, communicate and build personal brands," CCDH CEO Imran Ahmed said in a statement. "For women, however, the cost of admission to social media is misogynistic abuse and threats sent by abusers with impunity."

Study participants included actor Amber Heard, British broadcaster Rachel Riley, activist Jamie Klingler, journalist Bryony Gordon and the founder of Burnt Roti magazine, Sharan Dhaliwal, who have a total of 4.8 million followers on the platform.

Riley described online abuse as “inevitable” for women.

“For women in the public eye, receiving a constant stream of rude, inappropriate and even abusive messages to your DMs is unfortunately inevitable, and the fact that this happens away from the public view makes it all the more intrusive,” she said.

With access to the participants' direct messages, CCDH researchers reported 253 accounts that sent abusive messages. An audit revealed that at least 227 of the accounts remained active at least a month after the researchers reported them, which means Instagram "failed to act" on almost 90 percent of reports of abuse.

Nine out of 10 Instagram users who sent violent threats to the participants were allowed to remain on the platform, even after they were reported using Instagram's tools. CCDH research also suggests that half of abusers send more abusive messages when they're allowed to remain on the platform.

In an example used in the study, Instagram failed to remove accounts that made death threats to Heard's family, including her infant daughter.

"Instagram has chosen to side with abusers by negligently creating a culture in which abusers expect no consequences — denying women dignity and their ability to use digital spaces without harassment," Ahmed said.

Meta disagrees with “many” of the CCDH’s conclusions, the company’s head of women’s safety, Cindy Southworth, said in a statement.

“We do agree that the harassment of women is unacceptable,” Southworth said. “That’s why we don’t allow gender-based hate or any threat of sexual violence, and last year we announced stronger protections for female public figures.”

Instagram has acknowledged the harmful effects of viewing abusive messages, and in February 2021 it rolled out stricter measures to punish online harassment. The updates promised to disable accounts that are reported multiple times for sending "violating messages." In April last year, Instagram launched a tool to automatically filter direct message requests flagged as offensive or abusive into a separate hidden folder. It also released the "hidden words" function, which allows users to filter out certain words or phrases.

Despite the updates, the study identified "systemic" flaws in Instagram's direct messaging function that "seriously undermine users' safety."

Instagram users can't report voice messages and are forced to view abusive messages to report them, even if they're sent in "vanish mode." Blocking certain words using the "hidden words" feature is "ineffective at hiding abuse," the report found. Heard and Gordon were unable to retrieve full data downloads, suggesting that users may "face difficulties downloading evidence of abusive messages." That makes reporting harassment to law enforcement even more challenging.

In addition to the systemic flaws, the study also revealed that Instagram failed to act on all 125 recorded examples of image-based sexual abuse, such as unsolicited nude photos. Instagram's direct message function allows anyone to send unsolicited images, voice calls and other forms of harassment "at any time and in large volumes," without consent.

Riley, the broadcaster who participated in the study, pointed out that "anyone can privately send you something that should be illegal."

"If they did that on the street, they'd be arrested," she added.

While users can choose not to open their message request inboxes, some need to use them for networking and business opportunities.

Klingler, the activist who participated in the study, said in the report that she gets important media requests through direct messages.

Heard, meanwhile, said she stopped using Instagram because of increased paranoia and "frustration with the lack of recourse."

"Social media is how we connect with one another today and that medium is pretty much off limits to me," she said in the report. "That's the sacrifice I made, the compromise, the deal I made for my mental health."