Written by Isha Prakash and Anusha Shah
In a world where “seeing is believing” is an accepted adage, the internet seeks to challenge even this notion. A recent video featuring US Vice President Kamala Harris purportedly speaking gibberish has gone viral on social media. Closer home, an AI-generated image of PM Modi looking through a microscope incorrectly was also widely shared and mocked by netizens. Both are instances of “deepfakes”, a term that went viral in 2017. A portmanteau of “deep learning” and “fake”, it refers to fabricated videos generated using face-swapping techniques and technology.
A plethora of methods exists for creating deepfakes. The most common relies on neural networks, using deep learning and Artificial Intelligence (AI) to superimpose faces onto other media. The democratisation of deepfake generation technology is a game-changer because it enables the mass production of fabricated videos with remarkable accuracy. On most apps available today, generating a deepfake requires only a main video to serve as its basis, along with a few short clips or even a photograph of the person(s) whose face(s) you want to insert into it.
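To make the face-swapping mechanism concrete, the sketch below shows, in PyTorch, the shared-encoder, two-decoder autoencoder design that classic face-swap tools are commonly described as using: a single encoder learns pose and expression across both subjects, while each decoder learns to reconstruct one specific face. The layer sizes, names and the random stand-in input here are our own illustrative assumptions, not any particular app’s implementation.

```python
# A minimal, illustrative sketch (not production code) of the shared-encoder,
# per-identity-decoder design behind classic face-swap deepfakes.
# All layer sizes are arbitrary assumptions chosen for brevity.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 64x64 RGB face crop to a compact latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent vector; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder learns pose/expression; two decoders learn two identities.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# The "swap": encode a frame of person A, decode it with person B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)     # stand-in for a real face crop
swapped = decoder_b(encoder(frame_of_a))  # B's face with A's expression
print(swapped.shape)                      # torch.Size([1, 3, 64, 64])
```

In real tools, these networks are first trained on thousands of aligned face crops of each subject; only then does routing one person’s frames through the other’s decoder produce a convincing swap.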
The technology involved in creating deepfakes holds promise for various domains, including entertainment, education and healthcare. However, one must also acknowledge the associated risks, particularly the alarming threat it poses to the personal security and privacy of millions through audio-visual manipulation tactics. This includes the usage of deepfakes for purposes of identity theft and synthetic pornography. Deepfake pornography is almost always non-consensual, involving the artificial synthesis of explicit videos featuring celebrities or personal acquaintances. Another equally worrying ramification is the creation and dissemination of morphed videos of elected representatives and public figures in a political sphere already reeling from an avalanche of disinformation and polarisation.
To comprehend why the global and local regulation of deepfake technology must be expedited, one needs to delve deeper into the consequences of the misuse of deepfakes and its societal implications.
Legal scholars have forewarned us of the threats that these deceptive depictions pose to democratic discourse. Consider a widely shared clip of a political candidate caught on tape expressing an anti-Semitic viewpoint. In a world where doctored videos are the norm, it becomes easier for that candidate to dismiss genuine campaign-trail utterances as fabricated, an evasion that is hard to disprove. Compared to other synthetic media and fake news, deepfakes have a more pernicious effect, fostering a zero-trust culture in which individuals are unable or unwilling to discern fact from fiction. This epistemological chaos causes voters to retreat into their partisan bubbles, relying solely on politicians and news sources that align with their political beliefs. It can lead to an irrevocable breakdown of healthy democratic debate, erode trust in journalistic institutions and inflict irreparable damage on the reputations of prominent individuals, including elected officials.
Closer home, the use of deepfakes as a political campaigning tool is gaining steam. During the 2020 Legislative Assembly elections in Delhi, former BJP State President Manoj Tiwari released deepfake videos of himself criticising the incumbent Delhi government of Arvind Kejriwal in English and Haryanvi. While the goal was to create “positive campaigns” that reached different linguistic voter bases, it marked the debut of deepfakes in Indian election campaigns. Although not objectionable per se, adding such technology to the standard political toolkit will alter the playbook of future campaigns.
The discourse around the pornographic use of deepfakes is far more limited than that around their political use. As of September 2019, 96 per cent of deepfake videos online were pornographic, primarily falling under the category of “revenge porn”, according to a report by the company Sensity. The issue is evidently gendered, since almost all of them were non-consensual videos of women, featuring, but not limited to, public figures like Emma Watson and Scarlett Johansson. The next edition of the report, titled ‘The State of Deepfakes 2020’, indicates that over 85,000 harmful, expert-made deepfake videos had been detected by December 2020.
The misuse of sexual deepfakes, or Synthetic Sexually Explicit Material, is not limited to self-gratification; it can also be used to harass and blackmail victims of such abuse. This is especially alarming in a country like India, where the legality of pornography is ambiguous, due to which critical issues such as revenge porn remain largely unreported and unresolved. Existing laws are ill-equipped to counter an offence of this nature, and victims are left helpless by the absence of specific legislation on manipulated media.
Taking cognisance of this, Revenge Porn Helpline, a UK-based organisation, published a detailed report in 2020 titled ‘Intimate image abuse; an evolving landscape’, exploring the use of advanced technology for image abuse, its effects and the severity of the damage. In India, it has collaborated with Parihar, an initiative of the Bengaluru City Police for women and child welfare, to provide services and assistance to victims of revenge porn and deepfakes. However, data on how many people contact Parihar and how it assists victims in such cases is limited.
How to detect a deepfake? Although superficially convincing, deepfakes can be easily distinguished from real videos if you know what to look out for. The biggest giveaways are audio flaws, awkward shadows and soft or blurred areas during movement.
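As an illustration of the “soft or blurred areas” giveaway, the sketch below scores video frames using the variance of the Laplacian, a standard blur metric available in OpenCV. The file name and the threshold of 100 are illustrative assumptions; real forensic tools combine many such signals rather than relying on a single score.

```python
# Illustrative heuristic only: measure per-frame sharpness with the variance
# of the Laplacian (a standard blur metric). Unusually soft frames are one of
# the giveaways mentioned above. Threshold and file name are assumptions.
import cv2

def sharpness(frame_bgr):
    """Higher values mean sharper detail; blurred frames score low."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

cap = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical input file
scores = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    scores.append(sharpness(frame))
cap.release()

if scores:
    avg = sum(scores) / len(scores)
    # 100.0 is an arbitrary illustrative cut-off, not a validated one.
    verdict = "(suspiciously soft)" if avg < 100.0 else ""
    print(f"mean sharpness: {avg:.1f} {verdict}")
```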
Automating detection has been the primary focus so far. However, as the technology used to create fake digital content advances, automating its detection will become impossible, rendering cybersecurity’s perpetual cat-and-mouse game untenable.
It is becoming increasingly clear that combating the challenge posed by the unregulated use of deepfakes requires an amalgamation of technological innovations and legislative solutions.
The law does not evolve as quickly as technology. However, certain jurisdictions, such as the European Union, have tried to keep up. The EU updated its Code of Practice on Disinformation to counter the spread of disinformation via deepfakes, including provisions that can penalise non-compliant organisations such as Meta with fines of up to 6 per cent of their annual global turnover.
In India, sections of the Information Technology Act, 2000 criminalise the publication and transmission of intimate images of any person without their consent and lay down the obligations of intermediaries. Provisions of the Copyright Act, 1957 concerning the doctrine of fair dealing and the right to integrity can also be applied. Furthermore, deepfakes directly violate the fundamental right to privacy under Article 21 of the Constitution. If effectively implemented, privacy laws such as the new Digital Personal Data Protection Bill could be the most effective means of regulating deepfakes in India.
The creation and use of deepfakes will continue to grow as machine-learning algorithms become more sophisticated, and AI- and market-driven solutions will shape deepfake regulation. Facebook’s Deepfake Detection Challenge, aimed at encouraging and incentivising innovation in this area, is a positive step forward. Operation Minerva detects deepfakes by cross-referencing new uploads against its catalogue of digitally fingerprinted videos, alerting users if a potentially doctored version of existing media is found.
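Operation Minerva’s actual pipeline is proprietary, but the general idea of digital fingerprinting can be sketched with perceptual hashes: visually similar frames produce hashes that differ in only a few bits, so a small Hamming distance flags a likely copy. In the toy sketch below, the file names, the single-entry catalogue and the distance threshold are all illustrative assumptions.

```python
# A toy sketch of digital fingerprinting: each catalogued frame gets a
# perceptual hash, and new uploads are compared against the catalogue.
# This only shows the concept, not any real service's implementation.
from PIL import Image
import imagehash

def fingerprint(path):
    """Perceptual hash: visually similar images yield nearby hashes."""
    return imagehash.phash(Image.open(path))

# Hypothetical file names; a real catalogue would hold millions of hashes.
catalogue = {"original_frame.png": fingerprint("original_frame.png")}
upload = fingerprint("uploaded_frame.png")

for name, fp in catalogue.items():
    distance = fp - upload  # Hamming distance between the two hashes
    if distance <= 10:      # 10 is an arbitrary illustrative threshold
        print(f"possible doctored copy of {name} (distance {distance})")
```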
It has become apparent that a collaborative effort is indispensable. Nina Schick, an author and expert on generative AI, believes that technologists, domain-specific experts, policy officials and lawmakers must all come together to combat the misuse of deepfakes.
Prakash is a research fellow at Vidhi Centre for Legal Policy and Shah is a recent graduate from Government Law College, Mumbai