
The Ministry of Electronics & Information Technology (MeitY) has proposed a regulation that would require internet companies to remove content flagged as “fake” by the Indian Government or face legal liability. The proposal would require companies like Facebook and YouTube to “make reasonable efforts” to remove content identified as “fake” or “false” by the government’s Press Information Bureau (PIB) or any Union Government department. Even though the phrase “make reasonable efforts” may not necessitate the proactive removal of all content declared “false”, the current proposal would still require internet platforms to remove content once the government informs them of its “fake” nature. Failure to comply would cost a platform its crucial statutory immunity (or “safe harbour”), leaving it at risk of being sued over the content.
Imagine that a citizen’s social media post about the poor condition of a national highway goes viral. The PIB, or even the Ministry of Road Transport and Highways, declares the post “fake” and notifies internet platforms, resulting in its removal. Granting government bodies the authority to remove “fake” content in this manner raises serious constitutional concerns: it restricts free speech beyond what the Constitution permits and bypasses the safeguards against government censorship set out in Section 69A of the IT Act.
MeitY’s proposal also circumvents existing safeguards on the government’s power to block online content. Under Section 69A of the IT Act, the government can only block content online for reasons consistent with Article 19(2) and must follow a specific procedure when doing so. Currently, two writ petitions are pending in the Karnataka and Delhi High Courts arguing that the government regularly flouts these procedures and that stronger protections are needed for users. For instance, both petitions contend that a user must be given a hearing before their content is removed by the government. However, if the current proposal is accepted, the limited substantive and procedural safeguards of Section 69A, and the outcomes of the writ petitions, would become irrelevant. The government would be able to remove content by unilaterally determining it to be “false”.
Some may argue that internet platforms only risk liability if the content they refuse to remove is unlawful, and platforms should not be hosting unlawful content in the first place. However, the current proposal incorrectly equates “falsehood” with unlawfulness. Even if the PIB identifies “false” content that is also unlawful (for example, content that threatens public order), the current proposal lacks any process to scrutinise the government’s determination. Such an approach is incompatible with the rule of law, which is founded on checking government power through meaningful safeguards.
If the government is serious about online safety, it may consider enacting a law specifically addressing the harms caused by misinformation in its relevant contexts (for example, health or election misinformation). However, it must demonstrate why content removal is a necessary and proportionate response to the alleged harms of misinformation (for example, why not focus on increasing media literacy, or on holding those who spread misinformation accountable?). If it restricts free expression, it must do so on grounds specified in the Constitution and must establish safeguards that allow the use of government power to be scrutinised.
The writer is project officer, Centre for Communication Governance, National Law University Delhi