
From Google to Meta and X: what tech giants told Central panel on tackling deepfakes

Last year, the Centre set up a committee to examine issues related to deepfakes, following an order from the Delhi High Court.

Tech giants Google, Meta, and X discussed AI and deepfake policies with the Indian government.

As India examines issues around deepfakes and emerging technologies, at least three tech giants — Google, Meta, and X — told the Union Government at a stakeholder consultation meeting in January that they have several policies in place to tackle manipulated media.

Google and Meta indicated they already have labelling or disclosure policies for AI, deepfake or synthetic content. When it comes to users flagging use of their personas in manipulated media, only Google has a process in place, while Meta is “working on” protecting “celebrity personas”.

However, X emphasised that “not all AI content is deceptive in nature” and urged that “it is important to draw that distinction going forward.”


In November 2024, the Ministry of Electronics and Information Technology (MeitY) set up a nine-member committee to examine issues of deepfakes, following an order from the Delhi High Court. The committee held a consultation meeting with technology giants and policy and legal stakeholders on January 21. The stakeholders pressed for regulation around “mandatory AI content disclosure”, labelling standards and grievance redressal mechanisms, with a caveat that the emphasis should be on malicious actors rather than on creative uses of deepfake technology.

Two representatives from Google, who were present at the consultation meeting, told the committee that the company has had a policy on deepfakes since November 2023, and uses artificial intelligence (AI) to take down manipulative content intended to cause harm. Google said that, as per its policy on deepfakes, “they ask creators to disclose synthetic content and provide a label”, and that it also has a process for “users to claim they are being used to create deepfake so that it can be taken down if their persona is being used.”

Similarly, Meta, which launched its AI labelling policy in April 2024, said “it allows users to disclose when they upload AI content,” including for ads, where users would know if it has digitally altered material. It added that many of its policies are technology neutral — that is, it does not matter whether the alteration is specifically a deepfake or not. However, the Meta representative told the committee they are “working on protecting celebrity personas.”

X also briefed the committee, saying it has a “synthetic and manipulated media policy” where “content which are deceptive in nature are taken down”. However, it stated that for certain posts to be labelled, they should be “extremely deceptive and harmful”. X also said that “not all AI content is deceptive in nature”, and that “it is important to draw that distinction going forward.”


In the next three months, the MeitY-constituted committee is expected to complete its consultation with stakeholders, including victims of deepfakes.

Graphic: Where Google, Meta, and X stand on their emerging tech policies.
