By David Ingram

Governments on at least four continents are stepping up their calls to regulate Facebook, YouTube and other social media companies after years of frustration with how technology companies have managed violent content on their services.

With a wave of new and proposed laws in recent weeks, the governments — not including the United States — are making a sweeping challenge to the freewheeling speech rules of the internet that have opened up politics to new voices but also allowed violent extremist groups to flourish.

Australia this month threatened prison time for technology workers in a law it passed to fight violent imagery online. The British government has proposed a new internet regulator that could fine senior managers if they don’t meet a “duty of care” to users, while Europe has moved forward with proposals for stricter regulation of both terrorist content and copyrighted material.

Singapore is considering a law that would penalize technology companies that publish falsehoods, and internet freedom activists in Thailand say a new internet security law there amounts to “cybermartial law.”

Canada this month added itself to the list of exasperated countries asserting control, warning of possible financial penalties or other regulatory options if social media firms do not do more.

In the U.S., social media companies have been under a near-constant barrage of criticism, including from local and national politicians. This week, representatives for technology companies were called to testify before two congressional hearings with antithetical concerns: one on the rise of white nationalism, and another on the dangers of censorship. And while some politicians have suggested a variety of ways to regulate technology companies, none has offered laws that would force such companies to more aggressively moderate their platforms. Such laws would conceivably run afoul of U.S. free speech guarantees, which bar censorship more strictly than those elsewhere in the world.

While growing public scrutiny of social media companies and a history of light-touch regulation have provided the backdrop for the foreign governments' proposals, free-speech advocates expressed concern that the pendulum could swing dramatically toward heavy-handed censorship.

“Governments have been really jealous in a way of company control of online speech, and they’re pushing back,” David Kaye, the United Nations’ special rapporteur for free expression, told NBC News in a phone interview.

The challenge technology firms face to enforce even their own rules was on display last month when a gunman used Facebook to livestream his attack on two mosques in Christchurch, New Zealand, in which he killed 50 people. Facebook and YouTube said they blocked millions of copies of the graphic video, but couldn’t always keep up with people trying to spread it.

That followed earlier complaints from elected officials and regulators about the spread of terrorism content and the use of social media by anonymous users to meddle in elections.

Changing, fast and slow

The companies and their trade associations are pushing back in each country with an array of arguments, saying they want to work with the authorities and noting the thousands of people they’re hiring to moderate content.

Facebook Chief Operating Officer Sheryl Sandberg said in a phone interview that the social network knows it has to do more to earn back people’s trust after two years of scandals about privacy, violent material and its role in politics.

“What you’ve seen in the last kind of year and a half is we’re running the company very differently,” Sandberg said. “I used to spend more than half my time, the majority of my time, on the growth and the business, and now I spend the majority of my time on protection and security.”

Responding last month to the Christchurch shooting, Sandberg said in a blog post that Facebook was exploring restrictions on who could broadcast live on the service.

But it’s not clear the companies are changing fast enough for many politicians.

“The era of social media firms regulating themselves is over. It’s time to do things differently. It’s time to keep our children safe,” British Prime Minister Theresa May said in a video message posted Monday, the day her government published its regulatory proposal.

The argument goes to the heart of social media’s hugely profitable business model: showing ads alongside content posted at will by mostly nonprofessional users, generally without the companies paying for the content or being liable for it.

Kaye said the proposed laws come as social media companies have shown some success in better regulating their platforms with technology and the willingness to spend money on human moderators. Facebook employs 30,000 people in its safety division.

“The companies have never been better at taking down content,” he said.

But how these systems work is still mostly a secret to the public. Some civil society groups, which for years have been allies of the technology companies in opposing strict regulation, say they also have been frustrated by the companies’ failure to be transparent about their systems.

“Technologically, they’re not able to explain to regulators what algorithms they use, and so of course the regulators are going to be frustrated,” said Lucie Krahulcova, a Melbourne-based policy analyst for Access Now, which promotes free expression online.

Truth and consequences

The idea of using criminal law, not just civil liability, to clamp down on online content is particularly unusual in democratic countries such as Australia. Under the law passed this month by Australia’s parliament, technology companies must “expeditiously” take down “abhorrent violent material” or face enormous fines, and employees face prison terms of up to three years.

The law passed in the span of a couple of days with little public debate. Krahulcova said the result was a slapdash measure with vague definitions — but possibly a political winner ahead of Australian elections next month.

“The general public doesn’t have that much empathy for the tech companies, but there’s also not as much awareness of free speech or the right to data protection,” she said.

Faced with criminal penalties, companies will err on the side of removing content, said Eileen Donahoe, executive director of Stanford University’s Global Digital Policy Incubator.

“There will be so much over-censorship out of fear of having criminal consequences,” she said.

Advocates for free expression said they fear that authoritarian governments will copy laws like Australia’s, using the example of democratic countries as cover for violations of human rights.

“People who are critical of the government are arrested and tried for spreading their political beliefs under the same laws meant to stem the spread of false information on social media,” said Sanja Kelly, director for internet freedom at Freedom House, a nonprofit that advocates for civil liberties worldwide.

Singapore’s law has also caused alarm among free speech advocates who worry that it could be used to expand the government’s ability to target opposing politicians and journalists.

Freedom House warned in a report last year that “digital authoritarianism” was on the rise, citing Chinese censorship, privacy failures by firms such as Facebook, and moves by various countries to restrict speech online under the guise of fighting false news stories.

One way technology companies could sidestep regulations may be to shift their emphasis from public posts toward private, encrypted messages — hidden from government view and from the companies themselves. Last month, Facebook CEO Mark Zuckerberg said his company was planning such a move, though he did not say it was intended to avoid government oversight, and the network’s news feeds aren’t going away soon.

In a report last month, researchers at New York University recommended various steps the companies could take, such as removing any content that is provably false — something that Facebook does now only in cases of imminent harm or voting misinformation.

“The social media platforms spent a long time minimizing and downplaying problems in 2015, 2016, into 2017, and only recently began to acknowledge that the problems existed,” Paul Barrett, the report’s author, said.

“Even today, they’re taking relatively small steps, rather than the sort of dramatic, demonstrative, policy changes that might have caused governments to hold off,” he said.