A popular Hindi film actress appears in an Instagram short video and, as she looks straight into the camera, her hands have been digitally manipulated to undress her and make suggestive gestures.
In the crowded scroll of Instagram Reels and X timelines, a new kind of deception is spreading fast – and getting harder to spot. Artificial Intelligence-generated deepfake clips of prominent Indian actresses are circulating widely, blending in seamlessly with genuine footage to appear disturbingly real. As principles of privacy and consent are erased by a few lines of code, the consequences are painful.
On October 22, the Central government proposed rules that would make the labelling of AI-generated content mandatory on social media platforms. Users would also have to declare whether uploaded material is “synthetically generated information”. This comes as AI-generated deepfakes sweep the entertainment world, with Hrithik Roshan the latest among several prominent actors to file cases to protect their “personality rights”.
The Indian Express found several accounts on X and Instagram that routinely share deepfake videos of celebrities — predominantly women. Often, these clips carried no AI disclaimers or labels. They were shared by accounts with tens of thousands of followers and had racked up hundreds of thousands of views, even as the platforms hosting them continued to profit from the engagement. Names of the handles and impacted actors have been withheld to avoid directing more traffic toward the digitally altered videos.
Responsibility of platforms
“While platforms enjoy safe harbour protections, it comes with a price, which includes implementation of proactive and preventive measures to protect users and to act expeditiously when violations are identified… the very tech used to violate rights i.e. AI can be deployed to proactively find violative manipulated imagery,” said NS Nappinai, Senior Advocate at the Supreme Court of India and founder of Cyber Saathi, a non-profit that advocates for online safety.
A deepfake is a video that has been digitally altered using AI, typically to depict someone doing or saying something they never did. As with most technologies, the misuse of AI through deepfakes has a deeply gendered slant: the victims of such harm are overwhelmingly women. While it is difficult to ascertain how much deepfake imagery exists online, a report last year suggested that 84 per cent of social media influencers have been victims of deepfake pornography, and nearly 90 per cent of those victims were women.
Origins of deepfakes
Morphing women’s faces into pornographic videos has been around for years. Possibly the first use of deepfake technology for this purpose came in 2017, when Hollywood actress Gal Gadot’s face was superimposed onto the body of a porn performer.
In India, the issue first surfaced in 2023, when a deepfake video of actor Rashmika Mandanna entering an elevator went viral. Soon after, Prime Minister Narendra Modi called deepfakes a new “crisis”.
Non-consensual imagery such as this is among the key reasons the Indian government is moving to introduce rules mandating AI content labels on social media.
IT Ministry warning
In an explanatory note released on October 22, the IT Ministry said: “Recent incidents of deepfake audio, videos and synthetic media going viral on social platforms have demonstrated the potential of generative AI to create convincing falsehoods — depicting individuals in acts or statements they never made. Such content can be weaponised to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud.”
In her petition before the Delhi High Court earlier this week, actor Aishwarya Rai claimed that, among other things, AI-generated pornographic and intimate visuals created by superimposing her face were being circulated. The court eventually granted her protection.
On X and Instagram, while outright pornography may be censored, digitally altered videos of women making subtly lewd gestures are algorithmically served to audiences. With seemingly lax takedown enforcement, action usually happens only when such videos are reported — by which time they have already racked up hundreds of thousands of views.
Instagram and X did not respond to a detailed questionnaire sent before publication.
On their part, companies such as Meta – which owns Instagram – and Google already have some form of AI labelling on their platforms, asking creators at the time of uploading whether the content was made using AI. On Instagram, for instance, Meta applies an ‘AI Info’ label to content that is modified or created using AI, although enforcement remains patchy, as many AI-generated posts still appear without labels.
X’s AI policy says users “may not” share inauthentic media that could cause confusion on public issues, impact public safety, or cause serious harm. The company uses its own technology or third-party reports to determine if the media have been manipulated. But in cases where it cannot reliably determine if content is misleading, “we may not take action.”
At present, most of these measures remain reactive in nature, meaning labels often appear only after a video is flagged — if the creator hasn’t already declared that it was made using AI.
“Marking of AI-generated content as such serves a threshold level purpose of helping users to identify fake from real and platforms need to work towards labelling and watermarking at the stage of dissemination to the public through their platforms,” Nappinai said, even as she raised concerns over whether mere labelling is enough to address the problem. “This (labelling, watermarking) alone would not suffice. We need effectively implemented takedown mechanisms, which also rest on speedy actions by platforms and easily identifiable remedial mechanisms on each platform too. Users should be able to easily identify reporting and remedial options on each platform.”