Written by Jahnabi Mitra and Megha Garg
Everyone reaches that age when they are constantly met with the refrain, “Your biological clock is ticking.” It is brought to our attention at parties and gatherings. And then, even if you are a novice at online dating, you will reluctantly download dating and matrimonial apps. But what is appalling in that process is the amount of information you must upload. Matrimonial apps win on the sheer array of information collected. But dating apps aren’t so different – from selfie verification to adding videos and audio, only to prove that you are who you claim to be, they too gather data. After reading the recent Digital Personal Data Protection Act, 2023, your paranoia will know no bounds.
In the rapidly evolving landscape of online matrimonial and dating applications, users are not only sharing basic information such as their desires, gender, location, age, sexuality, religion, and eating and drinking habits but are also unwittingly contributing to the development of deepfake technologies.
Although the term “deepfake” resurfaced in our public consciousness late last year with the Rashmika Mandanna case, there’s a long history of deepfakes and their development before we resorted to AI. Deepfakes are realistic-looking fake videos, images and audio created using digital software. They combine or manipulate existing images, video or audio into computer-generated footage that depicts events, statements, or actions that never occurred. To create a deepfake video, the system requires enough data to comprehend what the subject looks like from all angles and under all lighting conditions. The same applies to voice replication, which requires data on the pitch, bass, tonality, and tempo of your voice.
The most significant threat from deepfakes is the possibility of identity theft and financial fraud. Criminals can use deepfake technology to impersonate a victim’s voice or likeness to access financial accounts, apply for loans or credit cards, or transfer funds. Think of all the other online platforms where voice or facial recognition is a step in identity verification. Take, for instance, a growing trend in cybercrime called whale phishing, also known as “CEO fraud”, where attackers pose as heads of companies and manipulate senior officials into transferring large amounts of money to fraudulent accounts. These schemes become seamless with the surge of deepfake videos.
Let’s go back to our dating profile, where we granted access and usage rights to the dating application and whoever else is mentioned in the fine print. None of us ever reads the terms and conditions. Now our voice, gestures, posture, facial structure and proportions, and whatever else is needed to generate a deepfake are served to the machine learning algorithm on a platter.
Privacy policies reveal that matrimonial, dating and other applications routinely share personal data with third parties for moderation, marketing, legal compliance, and other undisclosed purposes. This means that the intimate details users entrust to these platforms may find their way into various hands, raising questions about the extent of control individuals have over their own data. Clause 3(c)(ii) of the Digital Personal Data Protection Act states that its provisions will not apply to any personal data that is made public by the data principal or disclosed by another person to comply with a law. To explain the provision, the Act gives the example of content posted by the data principal: what you and I post on social media platforms will be considered public data, available for use by AI companies as part of their training sets.
The discourse surrounding deepfakes typically narrows its focus to sexual content or political misinformation, inadvertently overlooking a crucial aspect: the origin of the data used to create them. Is the data solely sourced from YouTube, other social media platforms or the apps on our phones? What are the potential risks for you and me, aside from the obvious fear of pornographic content being replicated?
Will matrimonial and dating applications be classified as social media platforms, and will the government allow our data from them to be used for training AI? And if we acknowledge we aren’t safe given all these considerations, what is the future of protection against this mass data breach?
Mitra is a PhD research scholar in Psychosocial Clinical Studies at Ambedkar University, Delhi. Garg is a marketing consultant