Political misinformation is a problem. But asking WhatsApp to risk user privacy is the wrong solution
We are only one step away from opening the floodgates of message traceability in the name of preserving election integrity. Besides being disproportionate, traceability is also unlikely to serve its purpose
Politics and misinformation make for a heady mix. With elections around the corner, addressing political misinformation will understandably remain a policy priority. A large part of the attention will be directed toward intermediaries — messaging services like WhatsApp and Signal, social media platforms like Facebook and X and video services like YouTube — that mediate the relationship between users and online information.
Today, sophisticated and readily available artificial intelligence tools can be used to generate all kinds of synthetic media — authentic-looking deep fake images, videos and voices of events that never occurred. Whether deployed carelessly or maliciously, deep fakes in the electoral context run the risk of misleading users and influencing their actions. This is a cause for concern. But the solution being proposed may be as damaging as, if not more damaging than, the problem itself.
As reported in this paper, the central government plans to rely on the controversial Rule 4(2) of the 2021 Information Technology Intermediary Guidelines to counter political deep fakes. The rule requires all significant social media messaging entities to have the capability to identify the “first originator of the information” on their platform. Originator requests can then be invoked either under a court order or by the government using its powers to intercept, monitor or decrypt information.
The provision is primarily meant to target end-to-end encrypted platforms like WhatsApp. End-to-end encryption ensures that when one user messages another, only those two individuals hold the keys to unlock the message. Neither the company nor a government agency can ordinarily do so. This makes encryption a powerful tool for preserving communications privacy. For the same reason, law enforcement agencies view end-to-end encryption as a threat to their functioning and keep looking for ways to undermine it.
Let us use an imperfect analogy from the real world. We know that a small portion of the people who leave their houses every day will commit a crime. To guard against these crimes, the government wants a tag recorded every time a citizen steps out of their house. It is not asking for the contents of the bag you were carrying at the time. But privacy would already be doomed, with the movement log of each and every citizen amassed by a corporation and made accessible to the government on demand.
What are the circumstances in which such a demand can be made? The stated purposes under Rule 4(2) are to aid in the prevention and investigation of certain types of serious offences. These include threats to India’s sovereignty, the security of the state, public order and sexual offences punishable with imprisonment of five years or more. While, on the face of it, all these appear to be reasonable grounds, the devil will lie in the implementation details.
First, the listed grounds leave ample scope for interpretation by courts and the government. This is particularly true for maintenance of “public order” that is routinely invoked in a variety of situations. In an insightful report on the use of Section 144 of the Criminal Procedure Code in Delhi, Bhandari and others found public order restrictions imposed in contexts like flying drones, using metallic manjhas for kites and carrying tiffin boxes inside cinemas. It is not hard to imagine why tracing encrypted messages to prevent such instances would be an excessive intrusion.
Second, the “first originator” of a message is not defined. For instance, a person who copies and pastes an existing message instead of using the forwarding function may register as a new originator. Unsuspecting users would thus be caught by this provision, while sophisticated miscreants could easily evade it by spoofing the identity of another user. Therefore, besides being disproportionate, traceability is also unlikely to serve its purpose.
Third, as seen in the example above, traceability of an identified message is possible only when logs of the origin of every message are maintained. The footprint of the rule will compromise the privacy of all messaging users in the hope of being able to deter and penalise a few of them. This does not meet the proportionality requirements of the fundamental right to privacy. Experts have also explained the more technical reasons why traceability techniques will undermine privacy and not work in practice.
For all these reasons, Rule 4(2) is currently under challenge before the courts. In its first application, the Tripura High Court recently stayed an order demanding from WhatsApp the origin of a fake resignation letter attributed to the state’s Chief Minister. This was on the ground that the trial court had not established the threat to public order while making the order. The next incident of political misinformation could attract a different response. We are, therefore, only one step away from opening the floodgates of message traceability in the name of preserving election integrity. A remedy that surpasses the ill and doesn’t even cure it.
The writer is a lawyer and technology policy researcher