
More than deepfakes, shallow fakes should worry everyone

Shallow fakes, or cheap fakes, are pictures, videos and voice clips created without the help of AI, using conventional editing or other simple software tools.


For elections, 2024 is going to be a record-breaking year. An unprecedented number of voters will participate in elections globally, with elections scheduled in over 50 nations, which are home to half the planet’s population.

Elections and misinformation often go hand in hand. This year, fighting misinformation will be tougher: not only will the old, traditional types of misinformation circulate online, but deepfakes and generative artificial intelligence will make matters much worse. Yet more than deepfakes, what we must worry about is shallow fakes, or cheap fakes.

Before the advent of AI technology, images and videos were doctored with traditional editing methods. The results were crude, but they got the job done for fake-news peddlers.

Shallow fakes or cheap fakes are pictures, videos and voice clips created without AI, by manual alteration or selective editing with simple software tools. They can be produced with ease; in some cases, a clipped video shared without context is enough, as the sketch below illustrates.
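To show just how low the bar is, here is a minimal sketch of that clipping step, driven from Python using the stock ffmpeg command-line tool. This assumes ffmpeg is installed; the file names and timestamps are hypothetical placeholders:

```python
# A minimal sketch of "clipping": lifting ten seconds out of a longer
# speech with stock ffmpeg. File names and timestamps are hypothetical.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-ss", "00:12:30",   # seek to the moment to be taken out of context
        "-i", "full_speech.mp4",
        "-t", "10",          # keep just ten seconds
        "-c", "copy",        # stream copy: no re-encode, done in seconds
        "out_of_context_clip.mp4",
    ],
    check=True,
)
```

Because the streams are copied rather than re-encoded, the clip is pixel-identical to the original footage, which is exactly what makes out-of-context clips so hard to flag.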

What is the difference between deepfakes and shallow fakes?

According to Sam Gregory, executive director at witness.org, “Deepfakes describe photorealistic and audio-realistic images, video and audio created or manipulated with artificial intelligence to deceive. Shallow fakes or cheap fakes are made with existing technologies—for example a conventional edit on a photo, or slowing-down a video to change the speech patterns of an individual, or more often rely on mis-captioning or mis-contextualising an existing image or video, claiming it is from a time or place which it is not from. For example, you might use an image from last year of a protest from one state around land rights, and present it as a protest from yesterday in another location.”
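The slow-down edit Gregory mentions is just as cheap. Below is a hedged sketch, again assuming ffmpeg is installed and using hypothetical file names, that re-times both the video and audio streams to 75 per cent speed:

```python
# A sketch of the "slowed-down video" shallow fake Gregory describes:
# re-timing a clip to 75% speed, which can make a speaker sound slurred.
# Assumes ffmpeg is installed; file names are hypothetical.
import subprocess

def slow_down(src: str, dst: str, factor: float = 0.75) -> None:
    """Slow playback to `factor` of the original speed (0.5 <= factor < 1)."""
    subprocess.run(
        [
            "ffmpeg", "-i", src,
            # Stretch video timestamps: 1/0.75 makes each frame last longer.
            "-filter:v", f"setpts={1 / factor}*PTS",
            # atempo < 1.0 slows the audio without changing its pitch.
            "-filter:a", f"atempo={factor}",
            dst,
        ],
        check=True,
    )

slow_down("speech_original.mp4", "speech_slowed.mp4")
```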

Witness says it helps people use video and technology to protect and defend human rights.

One popular example of a shallow fake is a video of US Vice-President Kamala Harris saying during a speech, “Today is today, and yesterday is today”.


The original footage was from a speech made during an abortion rights rally at Howard University in April 2023.

With the Lok Sabha elections scheduled to start on April 19, social media platforms are abuzz with misinformation, mostly in the form of shallow fakes.

Examples of shallow fakes:

The war has begun, with political parties’ social media handles sharing shallow fakes to mock their rivals. One such shallow fake was recently shared on the Congress’s official X handle. The party took a dig at Prime Minister Narendra Modi with a morphed image in which he is seen standing in front of a picture frame featuring wrestler Vinesh Phogat in tears. The image was first shared by the handle on June 1, 2023, and then again on March 18.

BJP’s official Instagram handle too mocked Congress leader Rahul Gandhi with a shallow fake.


The influx of shallow fakes on social media is increasing every day. One such shallow fake showed Union minister Smriti Irani in a belly dance outfit.

Another video showed AIMIM leader Asaduddin Owaisi singing the Shiv Tandav Stotra.

Do shallow fakes affect the electoral process?

“Cheap fakes and shallow fakes are already pervasive in electoral environments. For example, with claimed images of ballot-boxes recycled from one context to another or used with a deceptive explanation (for example, claiming vote fraud) or slowed-down videos that show a candidate as impaired physically. Crudely manipulated sexualised images were used to target women even before deepfake technology made it much easier to create non-consensual sexual or intimate images of women (and sometimes men),” said Sam Gregory.

Azahar Machwe, who works on strategy for the adoption of emerging AI capabilities at Lloyds Banking Group, said, “Shallow fakes impact the electoral process, especially content where the audio is replaced. Audio is trivial now. These fakes can be made even with a very small voice sample. The video can be picked from any place and the audio can be modified with a high level of accuracy. During elections, viral content can influence [voters] in a really short time.”
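Machwe’s point about replaced audio is easy to demonstrate. The sketch below, again assuming ffmpeg is available and using hypothetical file names, muxes a new voice track onto existing footage without touching the video stream:

```python
# A sketch of the audio-swap shallow fake Machwe describes: replacing the
# soundtrack of existing footage with a different recording. No AI needed.
# Assumes ffmpeg is installed; file names are hypothetical.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "rally_footage.mp4",     # source of the (untouched) video
        "-i", "fabricated_voice.wav",  # the replacement soundtrack
        "-map", "0:v",      # keep the video stream from the first input
        "-map", "1:a",      # take the audio stream from the second input
        "-c:v", "copy",     # no video re-encode, so the visuals look genuine
        "-shortest",        # end when the shorter stream runs out
        "swapped_audio.mp4",
    ],
    check=True,
)
```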

Have the avenues to create cheap fakes increased with time?

“Cheap fakes and shallow fakes have been easy to make since the dawn of images on the internet. They primarily rely on taking existing content and changing its context. And the increasing ease of photo and video editing tools has made it progressively easier to create shallow fakes,” Gregory said.


Machwe said the line between deepfakes and shallow fakes is blurring and that their volumes are increasing, especially in places where it is difficult to verify the source. “Most of the fakes are moving to AI-modified or AI-created as AI capabilities are now easily accessible via free apps on smartphones,” he added.

Most misinformation involves reconfiguration

A study conducted early in the pandemic by the Reuters Institute for the Study of Journalism at Oxford University found that 59 per cent of the misinformation involved various forms of reconfiguration, where existing and often true information is spun, twisted, recontextualised or reworked.

The study noted: “Less misinformation (38%) was completely fabricated. Despite a great deal of recent concern, we find no examples of deep fakes in our sample. Instead, the manipulated content includes ‘cheap fakes’ produced using much simpler tools. The reconfigured misinformation accounts for 87% of social media interactions in the sample; the fabricated content, for 12%.”

Gregory said that platforms had taken down significant quantities of reconfigured cheap fake and shallow fake content during the Covid pandemic.


“To protect against both deepfakes and shallow fakes it’s best to start with media literacy. For deepfakes it’s not a good idea to hope that ordinary social media users can spot ‘glitches’ left by the generative processes in an image or guess whether an audio clone is fraudulent, as these signals are not always easy to discern,” Gregory said.

“Instead it’s best to tackle both shallow fakes and deepfakes with a media literacy approach. I use the SIFT method from the academic Mike Caulfield: Stop (as your emotions are likely being triggered by the images or videos you are seeing); Investigate the source; Find alternative coverage; and Trace the original,” he said.

“Particularly for shallow fake images, tracing the original can be as simple as doing a Google image search to see if the image pre-existed the claim and is from a different context, or has been manipulated or edited,” he added.
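For readers who want to automate that check, below is one programmatic analogue of the reverse image search step, using the third-party Pillow and imagehash Python packages. The file names and the distance threshold are illustrative assumptions, not part of Gregory’s method:

```python
# A sketch of "trace the original": comparing a viral image against a
# suspected archive source with a perceptual hash. Requires the third-party
# `Pillow` and `imagehash` packages; file names are hypothetical.
import imagehash
from PIL import Image

viral = imagehash.phash(Image.open("viral_post.jpg"))
archive = imagehash.phash(Image.open("protest_2023_archive.jpg"))

# Hamming distance between the 64-bit hashes; small values survive
# recompression and resizing, so a near-zero score suggests the viral
# image is the old photo recirculated in a new context.
distance = viral - archive
print(f"hash distance: {distance}")
if distance <= 8:  # threshold is a rule of thumb, not a standard
    print("Likely the same underlying image, re-contextualised.")
```

Unlike a full reverse image search, this only works when you already have a candidate original to compare against, but it captures the same idea: shallow fakes usually reuse existing material, and the original is often findable.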

According to the World Economic Forum’s Global Risk Report 2024, India ranks first in facing the risk of misinformation and disinformation.


Chief Election Commissioner Rajiv Kumar has acknowledged that addressing misinformation in the digital age presents a complex challenge and advised political parties to demonstrate responsible behaviour, thereby underscoring the looming threat of misinformation.

Ankita Deshkar is a Deputy Copy Editor and fact-checker at The Indian Express.

 
