
Twitter launches 'safety mode' to block accounts for harmful language

The automated system is meant to limit harassment that has been an ongoing problem for women and minorities on the platform.

Twitter will launch a safety feature that allows users to temporarily block accounts for seven days for using harmful language or sending uninvited replies, the social media platform said on Wednesday.

Once Safety Mode is turned on, Twitter's systems will assess tweet content for the likelihood of negative engagement, along with the relationship between the tweet's author and the replier.

Accounts frequently interacted with will not be auto-blocked, the company said, as it takes existing relationships into account.

Twitter has previously taken several steps to address harassment on its site, which often occurs in unsolicited replies targeting women and minorities.

“We want people on Twitter to enjoy healthy conversations, so we’re limiting overwhelming and unwelcome interactions that can interrupt those conversations,” the company said.

Safety Mode can be turned on under settings and will be available to a small feedback group on iOS, Android and Twitter.com, beginning with accounts that have English-language settings enabled, Twitter said.