The significance of the French police’s move to take into custody the head of messaging platform Telegram as he stepped off a private jet near Paris on August 24 goes beyond the arrest itself, and the subsequent diplomatic fracas it set off with Moscow. The action against Russian-born billionaire Pavel Durov could mark the end of an era in which heads of technology companies had relatively little liability for the content on their platforms.

There is no real precedent for this type of action — wherein someone in a significant position of power in the tech industry has been arrested over allegations that their encrypted messaging app is complicit in allowing illicit activities by users.

Charges against Durov

Telegram is alleged to have allowed illicit content linked to drug trafficking, child pornography, violent propaganda, and organised crime. Durov himself is not charged with any of these offences — instead, French prosecutors have pressed about a dozen charges against him relating to the app, which claims close to 1 billion users, for enabling illicit activities by its users and for not cooperating with law enforcement.

Analysts say Durov’s arrest serves as a warning to the heads of other tech companies who are seen as dragging their feet on moderating objectionable content.

Durov, a French and Emirati dual citizen who was born in St Petersburg to a Russian father and Ukrainian mother, has since been released from custody, but barred from leaving the country. He could face a possible prison sentence in France in the coming months.

Russia, the country that Durov was forced to leave in 2014 after ostensibly refusing demands from Moscow to share information on alleged dissidents supporting Ukraine, is now backing him as French authorities begin legal proceedings.
Safe harbour rules

The legal action initiated by the French authorities against Durov impinges on the protection that is accorded to social media platforms across jurisdictions under a provision known as “safe harbour”.

The basic premise of safe harbour protection is: since social media platforms cannot control at the first instance what users post, they should not be held legally liable for any objectionable content that they host, provided they are willing to take down such content when flagged by the government or courts. Since social media platforms are generally understood to be crucial tools of free speech, safe harbour is viewed as a basic tenet of enabling freedom of expression on these platforms.

In the United States, this protection is available to social media platforms under Section 230 of the Communications Decency Act. Section 79 of India’s Information Technology Act, 2000 is somewhat similar — it classifies social media platforms as intermediaries and broadly shields them from legal action over the content that users post.

A contested area

There have been attempts to dilute these rules, primarily by national governments that aim to exert pressure on social media companies over alleged failure to comply with takedown requests. India’s Ministry of Electronics and Information Technology issued notices to YouTube, Telegram, and X in October 2023, asking them to remove all content related to child sexual abuse from their platforms.

In India, certain officials of the social media company in question can be legally prosecuted if the platform violates laid-down rules. Under The Information Technology Rules, 2021, social media companies with more than 5 million Indian users have to appoint a chief compliance officer who can be held criminally liable if the platform does not adhere to a takedown request, or violates other norms. However, the government has not so far exercised this power.
Under a new framework called the Digital India Bill, which is expected to succeed The Information Technology Act, 2000, the government has considered at various times whether safe harbour protections should be available to social media platforms. There is apprehension that the algorithms these platforms deploy potentially editorialise the content on these sites.

Free speech experts, however, see safe harbour as a critical tool for free expression online. While there are reservations that free speech on these platforms often gets conflated with advocacy for the companies’ interests, it is largely believed in civil society discourse that these platforms are a crucial promoter of free expression.