
Focus on content disclosure, labelling: Govt report to Delhi HC on ‘deepfakes’

Prior to the stakeholders’ meeting, MeitY’s panel looking at deepfake issues had opined there should be “mandatory intermediaries’ compliance”.


DEEPFAKES TARGETING women during state elections, a rise in scam content using AI, better enforcement rather than new laws, and the lack of a uniform definition of “deepfake” — these are some of the key concerns raised by stakeholders, according to a status report submitted by the Ministry of Electronics and Information Technology (MeitY) to the Delhi High Court on Monday.

According to the report, a nine-member committee set up by MeitY in November 2024 met a month later. The committee then met technology and policy stakeholders on January 21 this year. The stakeholders pressed for mandatory regulation around AI content disclosure with a caveat that the emphasis should be on malicious actors rather than on creative uses of deepfake technology.

The minutes of the meetings, made part of the status report, note that various stakeholders in attendance emphasised that “…there should be regulation around mandatory AI content disclosure, labeling standards, and grievance redressal mechanisms, while giving emphasis on malicious actors rather than benign or creative uses of deepfake technology”.


Prior to the stakeholders’ meeting, MeitY’s panel looking at deepfake issues had opined there should be “mandatory intermediaries’ compliance”.

Intermediary liability frameworks determine the extent to which intermediaries can be held liable for content on their platforms. The frameworks range from holding intermediaries entirely responsible for the content posted on their platform to complete immunity.

In the January 21 meeting, the report notes, the stakeholders raised concerns “about over-reliance on intermediary liability frameworks for AI-generated content regulation”, and emphasised “improving capacity of investigative and enforcement agencies rather than introducing new regulations”.

Explained

Worry about bad actors

THERE is a realisation among stakeholders that deepfakes need to be tackled, and through increased oversight of malicious actors. The minutes suggest companies cannot shy away from taking any liability.

A representative from X present at the meeting said, “Not all AI content is deceptive in nature. It is important to draw that distinction going forward.”


Deepfakes Analysis Unit (DAU), an initiative of the Meta-supported Misinformation Combat Alliance (MCA), flagged two specific election-related trends during the meeting: one, during state elections, DAU saw deepfakes targeted at women; and two, scam content using AI is on the rise, a trend which has increased post elections. DAU supports media organisations in detecting deepfakes.

It also flagged that audio deepfakes tend to be particularly difficult to detect, and highlighted that there is “no consensus on the definition of deepfake”.

In line with DAU’s concerns, the panel has noted in its minutes of the meeting that the “need for collaboration and standards was brought forward to create standardised detection frameworks and regulatory norms”.

One of the points discussed in the first meeting was the panel’s request to the Indian Cyber Crime Coordination Centre (I4C) to collect details of the deepfake cases registered and investigated by law enforcement agencies (LEAs) across the country.


In the first meeting in December, the committee decided that the “proposed solutions” for the issues pertaining to deepfakes “should also include mandatory intermediaries’ compliance with awareness initiatives and leveraging platforms like YouTube for targeted campaigns”.

The committee has now sought three months from the High Court to complete its consultation with stakeholders, including victims of deepfakes. In the status report, MeitY stated that it is yet to consult with victims of deepfakes and that it “has been working with the Ministry of Information and Broadcasting to get written inputs from victims of deep fakes”.

An attendee of the January meeting told The Indian Express on condition of anonymity that the discussion hinged broadly on the “reactive measures in case of harm” rather than “preventive measures”.
