
How the US Supreme Court could change the way social media works in the country

Over the next month, the SCOTUS will deal with cases that might alter the US's “hands-off” policy when it comes to policing social media.

SCOTUS can fundamentally alter how social media functions in the US, ushering in an era of government interference. (File)

In the US, social media platforms like Twitter, Facebook and Instagram operate on two basic principles. First, the platforms themselves decide what content to keep online and what to take down, without any government oversight. Second, platforms are not liable for the content that users post on them. Thanks to these principles, big tech companies have been able to operate free from both government censorship and the risk of being sued.

However, this system may be in for a radical upheaval, as the Supreme Court of the US (SCOTUS) is poised to reconsider the country's doctrine on social media over the next few months. On Friday, the SCOTUS discussed whether to hear two cases challenging laws in Texas and Florida that bar online platforms from taking down certain political content. Next month, the court is scheduled to hear a case questioning Section 230, a 1996 statute that protects platforms from liability for content posted by their users, The New York Times reported.

Experts say these cases could eventually alter the US's current "hands-off" approach to policing online speech, affecting every social media company from TikTok to Twitter.

Reluctance of US lawmakers to act

As social media's reach has grown, so has its power to influence real-world action. Misinformation and hate speech are hot-button topics whose effects often go well beyond the virtual world.

In the US too, social media has come under the microscope. From allegedly affecting the outcome of the 2016 elections to helping organise the January 6 insurrection, and from Covid-19 misinformation to the radicalisation of the Charleston church shooter, social media has played a big role in the country's society and politics.

However, despite mounting evidence of social media's real-world impact, US lawmakers have been reluctant to act, largely because of the sacrosanct First Amendment, which enshrines freedom of speech as arguably the bedrock of US democracy.

Instead, the US's focus has been on making online platforms "self-regulate", i.e. control the content themselves. Lawmakers' views on what a platform should regulate vary by party: Democrats have called for companies to regulate a wider variety of content, while Republicans, with a few notable exceptions, have denounced even that.


The Florida and Texas cases

Both Florida and Texas passed laws prohibiting social networks from taking down certain content after Twitter and Facebook banned then-President Donald Trump following the January 6 insurrection at the US Capitol.

In 2021, NetChoice and the CCIA, trade groups funded by various tech companies, challenged the laws in court, arguing that deciding what appears on its platform falls well within a company's First Amendment rights. Courts in both states initially ruled in favour of the social media companies, but the US Court of Appeals for the Fifth Circuit later upheld Texas's law, rejecting "the idea that corporations have a freewheeling First Amendment right to censor what people say."

Two federal appeals courts disagreeing on the issue has put pressure on the SCOTUS to intervene. Any decision that even remotely legitimises Texas's law would have far-reaching consequences, opening social media up to government intervention in the US.

Rethinking Section 230

The Supreme Court case challenging Section 230 of the Communications Decency Act has potentially far-reaching consequences as well. Section 230 shields online platforms from lawsuits over most content posted by their users. For years, courts have cited this section while dismissing claims against platforms like YouTube and Facebook over hate speech or misinformation hosted on their services.


On February 21, the Supreme Court will hear the case of Gonzalez v. Google. Brought by the family of an American killed in Paris during an attack by followers of the Islamic State, the lawsuit says that YouTube supported terrorism when its algorithms recommended Islamic State videos to users, NYT reported. The suit argues that such recommendations count as content produced by the platform itself, placing them outside the protection of Section 230.

While the petitioners argue that the scope of the lawsuit is "fairly narrow" and would not have a radical impact on the larger social media space, Halimah DeLaine Prado, Google's general counsel, disagreed. "Any negative ruling in this case, narrow or otherwise, is going to fundamentally change how the internet works," she said, since it could result in the removal of recommendation algorithms that are "integral" to the web.

(with inputs from the New York Times)
