What if YouTube stopped making recommendations?
What if a state official in Texas or Florida required Instagram to not remove vaccine misinformation that’s against the app’s rules?
Or what if TikTok remade its “For You” tab so that content moderators needed to OK videos before letting them appear?
The Supreme Court this week opened the door to radically different ways of thinking about social media and the internet. The court is poised to hear as many as three cases this term about the legal protections that social media companies have used to become industry behemoths, and about the freewheeling latitude the companies now have over online speech, entertainment and information.
Its rulings could be the start of a new reality on the internet, one where platforms are much more cautious about the content they decide to push out to billions of people each day. Alternatively, the court could create a situation in which tech companies have little power to moderate what users post, rolling back years of efforts to limit the reach of misinformation, abuse and hate speech.
The result could make parts of the internet unrecognizable, as certain voices get louder or quieter and information spreads in different ways.
“The key to the future of the internet is being able to strike that balance between preserving that participatory nature and increasing access to good information,” said Robyn Caplan, a senior researcher at Data & Society, a nonprofit organization that studies the internet.
At issue in one case that the court agreed to hear is “targeted recommendations,” the suggestions that services make to keep people scrolling, clicking, swiping and watching. Tech companies usually can’t be sued merely for allowing people to post problematic content, but in the coming months, the court will consider whether that immunity extends to posts that the companies themselves recommend.
A second case involving Twitter asks how aggressive tech companies need to be in preventing terrorists from using their services, and a third case yet to be accepted for argument may center on state laws in Texas and Florida that bar tech companies from taking down large swaths of material.
The Supreme Court’s decision to hear the “targeted recommendations” case landed like a bombshell in the tech industry Monday because the high court has never fully considered the question of when companies can be sued for material that others post on online services. Lower courts have repeatedly found companies immune in nearly all cases because of a 1996 federal law, Section 230 of the Communications Decency Act.
The recommendations case involves videos on YouTube about the Islamic State terrorist group, but the outcome could affect a wide array of tech companies depending on how the court rules later this year or next.
“They’re going to see this case as potentially an existential threat,” said Ryan Calo, a University of Washington law professor.
If tech companies lose immunity for recommended posts, companies that rely on unvetted user-generated content, such as Instagram and TikTok, may need to rethink how they connect people with content.
“At a minimum, they’re going to have to be much, much more careful about what they let on their platform, or much more careful about what they let their recommendation engines serve up for people,” Calo said. (A colleague of Calo’s brought the lawsuit at issue, though Calo is not involved in the case.)
The two cases the Supreme Court has agreed to hear, and the third likely on its way, present a test of the legal and political might of the tech industry, which has faced increased scrutiny in Washington from lawmakers and regulators but has largely fought off major threats to its sizable profits and influence.
In other words, the court may rein in Big Tech in a way Congress hasn’t chosen to.
“What this might do is put more pressure on platforms to give users more transparency over how the recommender system works, and then control over it,” said Brandie Nonnecke, who researches tech companies as the founding director of the CITRIS Policy Lab at the University of California, Berkeley.
“These are largely unchecked media systems that are providing content to people in ways that you and I don’t understand,” she said.
The Supreme Court’s ruling on targeted recommendations won’t necessarily affect online services that make recommendations but don’t allow user-generated content, such as Netflix or Spotify.
The immunity granted by lower courts under Section 230 has helped make possible a whole generation of internet companies, from review sites such as Yelp and Glassdoor, to news websites that allow user comments, to social media companies that let people post more or less freely. Companies can leave up or take down individual posts largely without fear of lawsuits over defamation or invasion of privacy.
Jeff Kosseff, author of a book on Section 230, “The Twenty-Six Words That Created the Internet,” said the outcome of the Supreme Court case was impossible to predict but that smaller companies with fewer resources had the most to lose.
“If the scope of Section 230 were substantially narrowed, I think you would see especially smaller platforms really second-guessing whether they want to take the risk of allowing user content,” he said.
“If you’re a hyper-local news site that allows comments on your stories, and you may not even have libel insurance, you’re going to think twice about allowing comments,” he said.
The idea of stripping tech companies of immunity for “algorithmic amplification” has bounced around for years. Roger McNamee, a venture capitalist and former Facebook investor, proposed it in 2020. Two members of Congress put the idea into legislation the same year.
When the court hears arguments in the case, it will do so in the context of a far different internet than the one that existed in 1996. Back then, the relatively few people who used the internet often did so via dial-up modems, and there were few if any recommendation engines on websites.
Tech companies were also in their infancy. Now, U.S. tech companies are among the most valuable corporations on the planet.
“In the world of today, the internet’s going to do just fine, and it no longer needs this protection,” said Mary Graw Leary, a law professor at The Catholic University of America.
Leary said the Supreme Court should consider the broader context of the Communications Decency Act, which also included anti-obscenity provisions designed to shield children from pornography.
“As industries grow and become more and more powerful, and we become more and more aware of the scope of harm that industries can create, there’s more of a need for regulation,” she said.