The first phase of voting for the Lok Sabha elections is on Friday (April 19). Over the past few weeks, there has been a deluge of disinformation and manipulated media online.
Two videos of actor Aamir Khan went viral this week. Both were manipulated versions of a promo for Khan’s popular TV show, Satyamev Jayate. In one, Khan appears to be explicitly supporting the Congress party, while in the other, he is seen speaking about nyay (justice) — a key Congress talking point in recent years, and the title of its manifesto (Nyay Patra or ‘Document [for] Justice’).
Recently, actor Ranveer Singh too was a victim of deepfake technology, when a manipulated video of him criticising Prime Minister Narendra Modi on the issues of unemployment and inflation was widely shared. In the original clip, however, Ranveer was actually praising the prime minister.
Here is how these deepfake videos are made, and how you can spot them.
itisaar.ai, an AI detection tool developed in collaboration with IIT Jodhpur, shows that these videos were generated using ‘voice swap’ technology.
As the name suggests, this refers to the process of using an AI algorithm to either alter or mimic an individual’s voice. The technology also allows the creators to change the characteristics of a voice, such as accent, tone, pitch, and speech patterns to make the videos more realistic.
Currently, several easy-to-use AI voice swap tools are available for free. The creator simply has to upload or record the audio sample she wants to replace, and then customise the settings to make the uploaded sample sound as realistic as possible.
While it is not easy to spot well-produced deepfakes, here are some tips to keep in mind while scrolling through social media, especially during election time.
Verify sources: Be cautious of audio or video content from unfamiliar sources, especially if it seems controversial or sensational. Verify the authenticity of any suspicious post by cross-referencing it with reliable sources and trustworthy media organisations.
Listen for anomalies: Deepfake audio may exhibit subtle anomalies, such as an unnatural tenor, slightly robotic speech, and irregular pauses. Listen closely for these telltale signs of manipulated or synthetic speech.
Scrutinise visual content: Deepfake audio is often accompanied by manipulated visual content, such as altered video footage. Check both the audio and visual elements for discrepancies or inconsistencies. For instance, if the lips do not move in sync with the speech, the video you are seeing may be manipulated.
Stay informed: Staying updated about day-to-day news and events is key to recognising the risks associated with deepfakes. It is harder to fool people who have general awareness of what is happening around them.
Use AI voice detectors: A few AI detectors, such as Optic’s ‘AI or Not’, are available for free. You can upload suspicious audio or video to such detectors, which will flag content that appears to be synthetic. Keep in mind, however, that no detector is foolproof, so treat its verdict as one signal among many.
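For readers curious about what an "anomaly" in audio can mean in measurable terms, the toy sketch below computes spectral flatness, one simple statistic that audio-analysis tools can draw on: tonal, voice-like signals score near 0, while noise-like signals score near 1. This is purely illustrative and is not the actual method used by itisaar.ai or ‘AI or Not’; the signals here are a synthetic sine tone and random noise, standing in for real recordings.

```python
import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    """Ratio of the geometric mean to the arithmetic mean of the power spectrum.
    Close to 0.0 for tonal (voice-like) audio, close to 1.0 for noise-like audio."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    power = power + 1e-12  # avoid log(0) in the geometric mean
    geometric_mean = np.exp(np.mean(np.log(power)))
    arithmetic_mean = np.mean(power)
    return float(geometric_mean / arithmetic_mean)

# Stand-in signals: a 440 Hz tone (harmonic, like voiced speech) and white noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000, endpoint=False)  # 1 second at 16 kHz
tone = np.sin(2 * np.pi * 440 * t)
noise = rng.standard_normal(16000)

print(f"tone flatness:  {spectral_flatness(tone):.4f}")   # near 0
print(f"noise flatness: {spectral_flatness(noise):.4f}")  # near 1
```

Real detection systems combine many such features, learned rather than hand-picked, but the underlying idea is the same: synthetic speech leaves statistical fingerprints that careful measurement can reveal even when the ear cannot.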