
Will Fear Change the Internet? Self-Policing Has Already Started

Image: An ISIS militant holds the group's flag as he stands on a tank captured from Syrian government forces in Qaryatain, central Syria, in an image posted on Aug. 5, 2015, on the Facebook page of Rased News Network, a page affiliated with ISIS militants. Militant website via AP

After the attacks in Paris and San Bernardino, public figures including President Obama, Donald Trump and former Google CEO Eric Schmidt called for regulating online activity.

Could fear of violence change the Internet in drastic ways? And what might a more tightly controlled Internet look like?

One thing is for sure: Trump couldn't close the Internet, even if he did become the next commander-in-chief.

"It's not like the president has a big switch behind his desk, and if he flicks it, the Internet goes out," Jim Lewis, senior fellow at the Center for Strategic and International Studies, told NBC News.

"It's thousands of networks that are independent and are designed to operate if one goes down. You can't turn off the Internet."

While that may be true, the Internet can be regulated. The pressure is mounting on tech companies to crack down on hate speech and on terrorists who use their services.


In an address to the nation on Sunday night, Obama urged "high-tech and law enforcement leaders to make it harder for terrorists to use technology to escape from justice."

That was followed by Schmidt, currently executive chairman of Google, who wrote in the New York Times on Monday that we "should build tools to help de-escalate tensions on social media — sort of like spell-checkers, but for hate and harassment."

Then, on Tuesday at a campaign event in South Carolina, Trump talked about stopping groups like ISIS from using social media to communicate.


"We have to go see Bill Gates and a lot of different people that really understand what's happening," he said. "We have to talk to them about, maybe in certain areas, closing that Internet up in some way."

Democratic Sen. Dianne Feinstein and Republican Sen. Richard Burr introduced a bill on Tuesday that would require tech companies to alert law enforcement of online terrorist activity as soon as they become aware of it.

What this could mean for the Internet

Threats of violence and terrorism already violate the terms of service for major tech companies like Facebook, Google and Twitter. But could we see a new age of increased censorship after the recent terror attacks?

In the latest "Freedom on the Net" report from Freedom House, which takes into account surveillance laws, content removal requests, and arrests over sharing critical views online, the U.S. ranked sixth in the world. That is below countries such as Germany and Canada, but above most nations, including France, the U.K. and Argentina. (China ranked last, below Syria and Iran.)

The reason? The First Amendment makes it very difficult for lawmakers to ban speech online, even if it's hateful. When it comes to restricting what people say on social media, the U.S. government has far less latitude than many of its European counterparts.

Instead of Washington cracking down, it's the tech companies that will take action against hate speech and content that's sympathetic to terrorist causes, predicted several experts who talked to NBC News.

In years past, social media sites were wary of censoring content. But they are changing their tune as users have tired of material they deem vulgar and offensive, according to Susan Benesch of the Berkman Center for Internet and Society at Harvard University.

"Five years ago, these companies would have said, 'Content isn't our responsibility. We just control the pipes, users provide the content,'" Benesch said.

In recent years, she said, there has been a shift. Complaints about graphic and offensive content have encouraged tech companies to tighten their standards.

"People are becoming more painfully aware of the horrible sentiments and ideas on the Internet," she said.

Thus, when sites like Reddit, Google and Microsoft took steps to remove "revenge porn," as they did earlier this year, they were commended instead of attacked by users. Many websites have also made comments more civil by forcing people to use their real names by signing in through Facebook.

Pair that with a public that is increasingly willing to shame people who are seen as saying offensive things — as with the Tumblr blog "Racists Getting Fired" — and you have an environment where much of the Web is self-policing.

And what about Schmidt's idea of a spell-checker for hate? Benesch said that filtering tools on social media often block innocent uses of a word while failing to stop people determined to spread vitriolic sentiments — or, in China's case, to use euphemisms to criticize the government.
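The weakness Benesch describes can be illustrated with a minimal sketch of a naive word-list filter. The blocklist entry, messages, and function names below are hypothetical examples, not any company's actual system: substring matching flags an innocent word that happens to contain a blocked string, while neither approach catches a deliberately coded spelling.

```python
import re

BLOCKED_WORDS = ["ass"]  # hypothetical blocklist entry


def naive_filter(text: str) -> bool:
    """Flag text if any blocked word appears anywhere as a substring."""
    lowered = text.lower()
    return any(word in lowered for word in BLOCKED_WORDS)


def word_boundary_filter(text: str) -> bool:
    """Flag only whole-word matches, which reduces false positives."""
    lowered = text.lower()
    return any(re.search(r"\b" + re.escape(word) + r"\b", lowered)
               for word in BLOCKED_WORDS)


# Substring matching flags an innocent message (a false positive)...
assert naive_filter("signing up for a yoga class") is True
# ...while whole-word matching leaves it alone.
assert word_boundary_filter("signing up for a yoga class") is False
# Neither catches a coded spelling (a false negative).
assert naive_filter("a-s-s") is False
```

The same trade-off scales up: tightening the filter to catch evasions multiplies false positives, which is why Benesch is skeptical that automated tools alone can handle hate and harassment.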

She expects social media companies to commit more resources to keeping themselves free of hateful content and to spend more time publicizing community norms.

It's not clear how safe data will be on encrypted messaging apps. But for average users, the Internet might not look or feel too different, just sanitized, as lawmakers and users put pressure on tech companies to monitor and remove offensive content.

"The Internet is moving away from a place where anyone can say anything and find any information they want," Jennifer Granick, director of Civil Liberties at the Stanford Center for Internet and Society, told NBC News. "My hypothesis is that the Internet will become more like TV."

It's doubtful that the U.S. would ever block entire services, like Turkey and China have done in the past, but that doesn't mean we shouldn't be wary of government and tech industry overreach.

"No matter how scared we are of ISIS," Benesch said, "we, as Americans who love the First Amendment, should be frightened of thousands of people scrutinizing everything we post and censoring it."