The deployment of privacy-enhancing technologies (PETs) such as federated learning, homomorphic encryption, and differential privacy is key to implementing the Digital Personal Data Protection (DPDP) Act, 2023, according to a senior IT Ministry official who was involved in drafting the law.
Likening the journey of India’s data protection legislation to purchasing a car, Vikash Chourasia, Scientist D in the IT Ministry’s Cyber Laws and Data Governance Group, said, “Now we have to drive the car.” He added, “That’s where we feel that PETs are the core agents for us to deliver the implementation of the DPDP. And that is where we look forward to partnering with institutions and academic groups.”
Chourasia was speaking on a panel featuring senior government officials and industry representatives at an AI pre-summit event organised by Google India in New Delhi on Thursday, November 20. It is one of several such events being held in the buildup to the India AI Impact Summit, scheduled for February 2026, which is set to be the first large-scale AI summit hosted in the Global South, after previous editions in Bletchley Park, Seoul, and Paris.
“I believe privacy is a problem which probably could be resolved at the engineering level more than at the user level. Because the user is the end consumer,” he said.
Chourasia also mentioned holding back-to-back meetings with stakeholders such as IIT Madras’ Centre for Responsible AI (CeRAI) in Chennai next month, along with training sessions for developers in India.
His remarks come just days after the IT Ministry notified the DPDP Rules, 2025, which pave the way for India to have a functional privacy law. However, only certain provisions, such as the Right to Information (RTI) Act amendment and the establishment of the Data Protection Board (DPB) of India, are currently in force.
Other provisions related to safeguarding citizens, such as the requirement for entities to seek informed consent from users before processing their personal data and to notify users of data breaches, will only be operationalised after 18 months. However, this compliance timeline may vary for big tech companies and start-ups.
Meanwhile, at Thursday’s event, Google highlighted its multi-pronged approach to harness AI to protect Indian users from online harm. It also announced a slew of upgrades across its scam protection product portfolio and an expansion of key partnerships with entities such as CeRAI and the CyberPeace Foundation.
As part of its efforts to combat online scams in the country, Google India announced that it is rolling out a scam detection feature on Pixel phones that uses Gemini Nano’s capabilities to analyse calls in real time and flag potential scams entirely on-device, without recording audio or transcripts or sending data to Google.
“The feature is off by default, applies only to calls from unknown numbers (not saved contacts), plays a beep to notify participants, and can be turned off by the user at any time,” Google said.
The tech giant further said that it is piloting a new feature aimed at combating digital arrest scams, which have seen a recent surge in India. As part of the feature, Android 11+ users will see a prominent alert when they try to share their screen with an unknown contact, Google said.
Users have a one-tap option to end the call and stop screen sharing. The feature has been developed in partnership with fintech players such as Google Pay, Navi, and Paytm.
Google also unveiled a new Android-based security protocol called Enhanced Phone Number Verification (ePNV) that replaces SMS OTP flows with a secure, consented, SIM-based check.
Additionally, the company said that Google Pay displays over one million warnings every week for fraudulent transactions. Google Play Protect has also blocked over 115 million attempts to install sideloaded apps that use sensitive permissions frequently abused for financial fraud in India.