This is an archive article published on November 16, 2023

As elections approach, how to deal with audio deepfakes?

What are audio deepfakes? Why are they so hard to recognise? And how to spot one?

Audio deepfakes have become increasingly hard to spot. (Pixabay)

Generative AI has gotten very good with time, good enough to make you question the audiovisual content you consume daily. It is now extremely easy to clone someone’s voice to create fake audio or video clips — in other words, to create deepfake audios.

In the arena of electoral politics, such cloning can be put to dangerous use, spreading misinformation more effectively than ever. Just clone the voice of any political leader, superimpose the audio onto an existing video clip, and share.

Take, for instance, an Instagram handle that creates AI voice clones of Prime Minister Narendra Modi and Kerala Chief Minister Pinarayi Vijayan. While in this case the creator does mention that the videos are fake and for entertainment purposes only, if shared without a watermark they are likely to fool many a lay person.

So, how does one recognise such doctored content?

First, what are AI voice clones or deepfake audios?

AI voice clones or deepfake audios refer to the use of artificial intelligence (AI) technology, particularly deep learning algorithms, to generate synthetic or manipulated voice recordings that mimic the voice of a specific individual. The technology has advanced considerably in recent years, and it is now possible to create highly realistic and convincing audio forgeries.

How exactly are AI voice clones made?

Creating a clone of anyone's voice is very easy. All you need is a laptop with a good internet connection and an audio clip of the person whose voice you wish to clone.

We spoke to Siva, the creator behind the above-mentioned Instagram handle. "Using the website covers.ai, one can simply upload an audio clip and then select the voice they want; the audio will be ready within 5 minutes," he said. "Also, anyone can create their own voice clone on that website by paying only Rs 399. They need to upload a good-quality audio clip of the voice that is at least 3 minutes long, and then wait for a week. The website will create their AI voice clone, and they can create any song or audio with it for a lifetime," he explained.

There are other online tools like play.ht and Eleven Labs that can be used to create AI voice clones easily. There are also several tutorials available on YouTube on making AI voice clones.

https://www.youtube.com/watch?v=sA8qWKOhS9M

Why has spotting audio deepfakes become hard?

Earlier, audio deepfakes were fairly robotic and unrealistic, making them easy to detect. However, technology has progressed significantly since then.


"With the help of advanced AI, deepfake videos and images are increasingly being created by taking advantage of content posted on public social media profiles," Aaron Bugal, field CTO, Asia Pacific and Japan, Sophos, told The Indian Express. "While setting social profiles to private and limiting them to only known friends or contacts can help limit overt exposure, it isn't a guarantee that someone among them won't repost it or use it for nefarious purposes," he said.

(Sophos is a global cybersecurity company.)

So, what can be done?

One way to deal with deepfakes is for the authorities to crack down on the social media platforms that circulate them.

"It is reassuring to see that the Indian Ministry of Electronics and Information Technology (MeitY) has sent an advisory to social media companies urging them to tackle deepfake content. In the advisory, the government also warned social media intermediaries that failing to remove deepfake information from their platforms might result in penalties such as losing safe harbour rights, among other things. Such stringent advice from the government can help to flatten the curve of data being exploited to create deepfake content," Bugal said.


"As a protective measure, digitally signed videos can be a way to verify that content can be trusted. Much like how certificates are used to validate website security and email communications, the same could be used for validating digital media. As technology evolves and deepfake production times shrink and quality vastly improves, a point may come where it's impossible to distinguish a deepfake from real recorded content; therefore, validating content as true using a signing or verification process is needed," he added.
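The signing idea described above can be illustrated with a minimal sketch. This is a simplification: it uses a keyed hash (HMAC) with a hypothetical shared secret, whereas real media-provenance schemes such as C2PA use public-key signatures so that anyone can verify content without holding the signer's secret. The point it demonstrates is the same: any alteration to the media invalidates the tag.

```python
import hashlib
import hmac

def sign_media(media_bytes: bytes, secret_key: bytes) -> str:
    """Produce a tag that binds the content to the publisher's key."""
    return hmac.new(secret_key, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str, secret_key: bytes) -> bool:
    """Recompute the tag; any edit to the media produces a mismatch."""
    expected = sign_media(media_bytes, secret_key)
    return hmac.compare_digest(expected, tag)

# Hypothetical key and clip, standing in for a real publisher key and audio file
key = b"publisher-secret"
clip = b"original audio bytes"
tag = sign_media(clip, key)

print(verify_media(clip, tag, key))               # untouched clip verifies
print(verify_media(b"doctored audio", tag, key))  # tampered clip fails
```

In a real deployment the tag would travel embedded in the media file's metadata, and verification would happen in the viewer's player or browser.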

How can you identify deepfakes?

Dealing with AI voice clones and potential audio deepfakes requires vigilance and proactive measures. Here are three steps you should follow whenever you see a video or audio clip on social media.

  1. Stay informed: Keep yourself updated about the latest political developments and the statements made by key political leaders so that you don’t fall into the trap of a widely shared audio or video clip of a politician making a controversial statement.
  2. Verify before you share: Verify the source of the audio or video clip. If the source is not reliable, avoid sharing.
  3. Use AI detection tools if possible: There are certain AI voice detectors available online; however, unlike AI voice cloning tools, these are not free. Tools such as aivoicedetector.com and play.ht can be used to detect AI-generated voices.

Why should we be extra vigilant ahead of elections?

There are several audio clips of political leaders available online, and all that is required to create a flawless audio clone is a clip barely one or two minutes long. During elections, this misinformation can influence public opinion, damage the reputation of candidates, and manipulate the democratic process.

Also, sections of the Indian Penal Code and IT Act can be invoked by police against those spreading misinformation and fake news.

Ankita Deshkar is a Deputy Copy Editor and fact-checker at The Indian Express, specialising in cyber law, information technology and online verification.

 
