How the Supreme Court could change the internet as we know it

The Supreme Court this week opened the door to radically different ways of thinking about social media and the internet.


What if YouTube stopped making recommendations?

What if a state official in Texas or Florida barred Instagram from removing vaccine misinformation that violates the app’s rules?

Or what if TikTok remade its “For You” tab so that content moderators had to approve videos before they appeared?

The court is poised to hear as many as three cases this term about the legal protections that social media companies have used to become industry behemoths, and about the freewheeling latitude the companies now have over online speech, entertainment and information.

Its rulings could be the start of a new reality on the internet, one where platforms are much more cautious about the content they decide to push out to billions of people each day. Alternatively, the court could create a situation in which tech companies have little power to moderate what users post, rolling back years of efforts to limit the reach of misinformation, abuse and hate speech.

The result could make parts of the internet unrecognizable, as certain voices get louder or quieter and information spreads in different ways.

“The key to the future of the internet is being able to strike that balance between preserving that participatory nature and increasing access to good information,” said Robyn Caplan, a senior researcher at Data & Society, a nonprofit organization that studies the internet.

At issue in one case that the court agreed to hear is “targeted recommendations,” the suggestions that services make to keep people scrolling, clicking, swiping and watching. Tech companies usually can’t be sued merely for allowing people to post problematic content, but in the coming months, the court will consider whether that immunity extends to posts that the companies themselves recommend.

A second case involving Twitter asks how aggressive tech companies need to be in preventing terrorists from using their services, and a third case yet to be accepted for argument may center on state laws in Texas and Florida that bar tech companies from taking down large swaths of material.

The Supreme Court’s decision to hear the “targeted recommendations” case landed like a bombshell in the tech industry Monday because the high court has never fully considered the question of when companies can be sued for material that others post on online services. Lower courts have repeatedly found companies immune in nearly all cases because of a 1996 federal law, Section 230 of the Communications Decency Act.

The recommendations case involves videos on YouTube about the Islamic State terrorist group, but the outcome could affect a wide array of tech companies depending on how the court rules later this year or next.

“They’re going to see this case as potentially an existential threat,” said Ryan Calo, a University of Washington law professor.

If tech companies lose immunity for recommended posts, companies that rely on unvetted user-generated content, such as Instagram and TikTok, may need to rethink how they connect people with content.

“At a minimum, they’re going to have to be much, much more careful about what they let on their platform, or much more careful about what they let their recommendation engines serve up for people,” Calo said. (A colleague of Calo’s brought the lawsuit at issue, though Calo is not involved in the case.)

The two cases the Supreme Court has agreed to hear, and the third likely on its way, present a test of the legal and political might of the tech industry, which has faced increased scrutiny in Washington from lawmakers and regulators but has largely fought off major threats to its sizable profits and influence.