
Where Do We Draw the Line When It Comes to Free Speech Online?

Terrorist propaganda, fake news stories, and people who are just flat-out mean seem to be spoiling the fun of the internet now more than ever.
Image: Twitter (Mary Turner / Getty Images file)

While the internet has never been a utopia devoid of trolls, some experts say fake news, hate speech, and terrorist propaganda are turning it into a digital cesspool.

In a country that prides itself on the First Amendment, that leaves social media companies with the delicate task of sorting out what counts as free speech and deciding what they won't tolerate on their platforms.

Related: Protecting Your Internet Presence in the Age of Donald Trump

Last month, members of the Global Network Initiative, including Facebook, Google, and LinkedIn, said they shouldn’t be pressured by governments to change their terms of service or restrict content. But with the rise in fake news and digital terrorism, some experts say it's time for more action.

"Free speech is this mantra people repeat over and over to justify all sorts of terrible things they say," Jen Golbeck, a professor at the University of Maryland's College of Information Studies, told NBC News. "A lot of it is straight-up harassment designed to upset the person who is on the other end."

Chances Are, You've Been Harassed Online

A Pew Research Center survey published in 2014 found that four out of 10 internet users had experienced some degree of online harassment. Of that group, 40 percent said they took steps to respond, whether by blocking the person, reporting them to the site where it occurred, or withdrawing online altogether.

Right-wing journalist Milo Yiannopoulos was permanently banned from Twitter for harassing comedian Leslie Jones. Robin Williams' daughter, Zelda, retreated from the service for a while after she was bombarded with abuse following her father's death. And former Reddit CEO Ellen Pao was on the receiving end of vicious attacks and threats after she tried to crack down on hate speech and revenge porn on the site.

Those are just a few of many cases.

The online harassment of "Saturday Night Live" star Leslie Jones became a national story, with the Department of Homeland Security investigating the hacking of her personal website.

When Does Extremism Cross the Line?

When it comes to combating online extremism, President-elect Donald Trump has hinted he would intervene. Trump said during his campaign that he would call on "our most brilliant minds" to help close down parts of the internet in order to combat the Islamic State.

Image: President-elect Donald Trump (Carlo Allegri / Reuters)

Cutting off parts of the internet has been done before. Egypt blocked most internet access in the country during the Arab Spring in 2011 by withdrawing more than 3,500 Border Gateway Protocol (BGP) routes, according to Renesys, a networking firm. Those routes are the advertised paths that tell the rest of the internet how to reach an internet service provider's customers; without them, the internet simply stops working for the people who depend on those paths for service.
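As a rough, purely illustrative sketch (not a model of how real BGP routers or a national ISP operate), the snippet below treats advertised routes as entries in a lookup table: once a prefix is withdrawn, traffic bound for it has nowhere to go. The prefixes, names, and table are all invented for the example.

```python
# Toy illustration only: why withdrawing routes makes destinations unreachable.
# Real BGP involves routers exchanging prefixes between autonomous systems,
# not a single in-memory dictionary.

routing_table = {
    "203.0.113.0/24": "upstream-ISP-A",   # documentation (RFC 5737) prefixes
    "198.51.100.0/24": "upstream-ISP-B",
}

def withdraw(prefix: str) -> None:
    """Simulate a route withdrawal: the path to this prefix disappears."""
    routing_table.pop(prefix, None)

def reachable(prefix: str) -> bool:
    """Without an advertised route, traffic for the prefix cannot be delivered."""
    return prefix in routing_table

withdraw("203.0.113.0/24")
print(reachable("203.0.113.0/24"))   # False: users behind this prefix drop offline
print(reachable("198.51.100.0/24"))  # True: networks with intact routes still work
```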

Related: Truth and Transparency Take Center Stage at Facebook

Twitter had become one of several online services used by ISIS to spread propaganda and recruit new members. Twitter responded by increasing the staff on its abuse reporting team and leveraging "proprietary spam-fighting tools" that are able to surface accounts that may violate Twitter's policies, the company said.

The effort has paid off: the company reported in August that it had suspended 235,000 accounts for promoting violent extremism over a six-month period.

"We have already seen results, including an increase in account suspensions and this type of activity shifting off of Twitter," Twitter's blog post said.

And then there's all of that fake news. Days after Donald Trump was elected president, Facebook CEO Mark Zuckerberg said it was "pretty crazy" to think false news stories could have influenced the election and warned that Facebook "must be extremely cautious about becoming arbiters of truth."

Where Does Free Speech Stop Online?

But expecting to have free speech on a social network is a bit like expecting to be able to walk into a church service shouting obscenities, Peter Scheer, executive director of the First Amendment Coalition, told NBC News.

"It is one's right to free speech somewhere, but not necessarily anywhere," he said.

That's the precise reason why many people are gravitating to Gab, according to CEO Andrew Torba.

The new social network, which some have called a "Facebook for the alt-right," has more than 100,000 accounts and twice as many people on its waiting list. Gab has billed itself as a place where people are free to speak their minds.

"The reason for the massive demand is simple: People around the world feel that they cannot speak freely and express themselves online. They've seen censorship at scale, progressive-leaning bias, and recognize the monopoly that Silicon Valley has on information, communication, and news online," Torba told NBC News.

There are still some rules, though. Gab won't tolerate terroristic threats, illegal pornography, or the posting of other people's personal information. The site also adds that it would be lovely if everyone tried to be nice to each other.

While Gab has attracted many conservatives who feel their voices are being stifled elsewhere, Gab's team is also quick to point out its diversity. The company's three executives are a conservative Christian, a Muslim Turkish Kurd, and an Indo-Canadian with Hindu beliefs.

"Our policy is to allow free speech within the limitations of the law as a United States corporation. We are aiming to simplify our guidelines even further in the future to express this point," Torba said.

Letting the User Take Control of the Experience

Gab relies on filters, allowing people to choose what they don't want to see. With users in control, Torba said, "very few people use it [a reporting feature], mainly because they can already control their own experience."

In November, Twitter rolled out a similar feature that allows people to mute keywords and phrases, preventing them from showing up in their notifications.
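Conceptually, a keyword mute is just a filter applied before a notification is displayed. The sketch below is a hypothetical illustration of that idea, not Twitter's or Gab's actual code; the function names, fields, and sample data are invented for the example.

```python
# Hypothetical sketch of keyword muting: hide any notification whose text
# contains a muted word or phrase, matched case-insensitively.

muted_terms = {"spoiler", "election results"}

def is_muted(text: str, muted: set) -> bool:
    """Return True if the notification text contains any muted term."""
    lowered = text.lower()
    return any(term in lowered for term in muted)

notifications = [
    "Huge spoiler for tonight's finale!",
    "Your friend liked your photo.",
]

visible = [n for n in notifications if not is_muted(n, muted_terms)]
print(visible)  # ['Your friend liked your photo.']
```

In practice the same check could run against replies or timeline entries as well; the point is that the user, not the platform, decides which terms disappear.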

"Our hateful conduct policy prohibits specific conduct that targets people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease," the company said in a blog post.

"We don’t expect these announcements to suddenly remove abusive conduct from Twitter. No single action by us would do that. Instead, we commit to rapidly improving Twitter based on everything we observe and learn."

Self-Advocacy in the Online Space

When it comes to harassment, "I think that the solution lies in the space of allowing people to control what they see and for platforms to decide what they want to be platforms for," Golbeck said. "Do they want to be a place where it is a cesspool with what people say online?"

As long as the sky is blue, there will most likely still be internet trolls. The rash of fake news, however, is a relatively new problem. Golbeck likened it to the 1990s, when a search for something as innocuous as broccoli could turn up porn sites looking to make money.

Money is, of course, one of the key motivators for many of the fake news sites. Hitting these faux news outlets in the pocketbook may be the best way to stop them in their tracks, Golbeck said.

Related: Twitter Launches New Anti-Troll Safety Committee

"They are going to write whatever they want, and the more people click, the more money they make," she said. "There is a temptation to make an ideological argument, but really they are motivated by the money."

Facebook and Google both made it clear last month they won't allow sites spreading misinformation to run on their ad networks, essentially cutting off their main source of revenue.

Golbeck said it will be "interesting to see how this plays out" and that she's hopeful it "could have a real chilling effect on fake news."

But cutting the purse strings is just one part of the battle against misinformation.

Less than two weeks after the election, with the issue still simmering, Zuckerberg shared a more detailed account of projects he said were already underway to stop these sites.

"The most important thing we can do is improve our ability to classify misinformation. This means better technical systems to detect what people will flag as false before they do it themselves," he said.

He also said he wants to make it easier for people to flag fake stories and that Facebook will focus on "raising the bar" for stories that appear in the News Feed.
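As a hedged illustration of what "classifying misinformation" can look like in practice, the sketch below trains a minimal supervised text classifier to predict whether a story is likely to be flagged, in the spirit of Zuckerberg's remark about detecting what people will flag as false before they do. The headlines, labels, and reliance on headline text alone are invented for the example; Facebook's actual systems are not public and would draw on far richer signals.

```python
# Illustrative sketch only: predict whether a headline resembles previously
# flagged stories, using scikit-learn's TF-IDF features and logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up training set: 1 = previously flagged as false, 0 = not flagged.
headlines = [
    "Shocking cure doctors don't want you to know",
    "Celebrity secretly endorses miracle diet",
    "City council approves new transit budget",
    "Local library extends weekend hours",
]
was_flagged = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, was_flagged)

# Score a new headline for review before any user reports it.
print(model.predict_proba(["Miracle cure hidden by doctors"])[0][1])
```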

While Golbeck is hopeful that Facebook, Twitter, and other platforms will continue to make progress by pairing algorithms with human reviewers to monitor troubling messages, she pointed out that the problem will continue in some other corner of the internet, and there's likely little anyone can do about it.

"They aren't mainstream, but you can get that stuff up if you want to. The question is, 'What kind of audience do they get?'" she said. "There are plenty of platforms and places that just don't care what people are posting."