Cybercrime officials in India have been tracking apps and websites that use Artificial Intelligence (AI) algorithms to produce nude photographs of unsuspecting people. These images are then used to blackmail victims, seek revenge, or commit fraud on social networking and dating sites.
Cybercriminals use AI software — now easily available on apps and websites — to superimpose a digital composite (an image assembled from multiple media files) onto an existing video, photo or audio clip.
Deep nudes are computer-generated images and videos in which a real person's face is superimposed on explicit content. In March 2018, a fake video of former US First Lady Michelle Obama appeared on Reddit; an app called FakeApp had been used to superimpose her face onto the video of a porn star.
In 2017, a pornographic video featuring actor Gal Gadot surfaced on the Internet, created using the same AI technology. Other deepfake videos have used the facial features of Daisy Ridley, Scarlett Johansson, Maisie Williams, Taylor Swift and Aubrey Plaza.
And it’s not just restricted to nudes or pornography. In 2018, comedian Jordan Peele used Adobe After Effects and FakeApp to make a video in which former US President Barack Obama appears to be voicing his opinion on the Hollywood film Black Panther and commenting on current President Donald Trump. In the recent Delhi riots case, a Hindi video message of Delhi BJP president Manoj Tiwari was recreated with English audio.
Essentially, AI algorithms transfer one person's words, head movements and expressions onto another person so seamlessly that it is difficult to tell the result is a deepfake unless one closely examines the media file.
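Reported descriptions of tools like FakeApp describe a "shared encoder, per-identity decoder" design: one network learns a compact code for pose and expression, while a separate decoder for each person learns to render that person's face from the code. The toy sketch below only illustrates that structure — the matrices are random stand-ins for trained weights, and all names (`encoder`, `decoder_b`, `swap_face`) are assumptions for illustration, not the real software:

```python
import random

random.seed(0)

def rand_matrix(rows, cols):
    # Random weights standing in for parameters a real model would learn.
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    # Plain matrix-vector product (a stand-in for a deep network layer).
    return [sum(w * x for w, x in zip(row, v)) for row in m]

FACE_DIM, LATENT_DIM = 8, 3  # toy sizes; real models use thousands of dimensions

# One shared encoder extracts identity-agnostic features (pose, expression).
encoder = rand_matrix(LATENT_DIM, FACE_DIM)
# One decoder per identity renders that person's face from those features.
decoder_a = rand_matrix(FACE_DIM, LATENT_DIM)
decoder_b = rand_matrix(FACE_DIM, LATENT_DIM)

def swap_face(face_a):
    """Encode person A's expression, then render it as person B's face."""
    latent = matvec(encoder, face_a)   # expression/pose code
    return matvec(decoder_b, latent)   # same expression, B's identity

fake_frame = swap_face([0.5] * FACE_DIM)
print(len(fake_frame))  # swapped frame has the same dimensions as the input
```

Because the encoder never learns who the face belongs to, feeding it person A and decoding with person B's decoder is what produces the seamless transfer of expressions described above.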
In 2017, a Reddit user going by the name “deepfakes” posted explicit videos of celebrities. Since then, several such instances have been reported, alongside the development of apps and websites easily accessible to the average user.
The debate around “deep nudes” and “deep fakes” was rekindled in July 2019 with the popularity of applications such as FaceApp (a photo-editing tool) and DeepNude, which produced fake nudes of women.
Because deepfake images, audio and videos can be so realistic, the technology lends itself to misuse by cybercriminals seeking to spread misinformation or to intimidate and blackmail people. In a presentation, Fayetteville State University in North Carolina called it one of the “modern” frauds of cyberspace, along with fake news, spam/phishing attacks, social engineering fraud, catfishing and academic fraud.
According to a CSIRO Scope article from August 2019, “Creating a convincing deepfake is an unlikely feat for the general computer user. But an individual with advanced knowledge of machine learning (the specific software needed to digitally alter a piece of content) and access to the victim’s publicly-available social media profile for photographic, video and audio content, could do so.”
Even so, various websites and applications with AI built in have made it much easier for lay users to create deepfakes and deep nudes. As the technology improves, the quality of deepfakes is also expected to get better.
According to a Vice article, with the emergence of tools such as Adobe VoCo, the Face2Face algorithm that can swap recorded videos with real-time face tracking, and open-source code, it is becoming easier “to fabricate believable videos of people doing and saying things they never did. Even having sex.”
At least in the US, the legality of deepfakes is complicated. While a person being harassed by deepfakes may claim defamation, removing such content could be considered censorship — a violation of the First Amendment, which guarantees Americans the freedoms of religion, expression and assembly, and the right to petition.
According to the Cyber Civil Rights Initiative, 46 states in the US have “revenge porn” laws. Revenge porn refers to the creation of sexually explicit videos or images that are posted on the Internet without the consent of the subject as a way to harass them.
But in the case of deep nudes, even consent is hard to establish, since it is not the person's actual body in the video, only their face and expressions. One argument in defence of such deep nude videos, for instance, could be that they are parody and hence protected under the First Amendment.
But there may be some hope in the form of the “Right to be Forgotten”, a concept that allows a user to request that companies such as Facebook and Google, which have collected their data, take it down. According to a recent article published by the Washington Policy Center, privacy protection statutes in the European Union (EU), and those being drafted by some US states, have introduced this concept.
According to the Cyberbullying Research Centre (CRC), catfishing refers to the practice of setting up fictitious online profiles, “most often for the purpose of luring another into a fraudulent romantic relationship.”
An article on CRC says that to “catfish” someone, “is to set up a fake social media profile with the goal of duping that person into falling for the false persona.”
While it is not easy to keep track of who downloads or misuses your images, the best way to protect yourself is to ensure you are using privacy settings on your social media profiles that suit you. If you feel your image has been used without your permission, you could use freely available reverse image search tools to find images that are similar to yours.
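Reverse image search tools typically work by comparing compact "perceptual fingerprints" of images rather than raw pixels, so near-duplicates still match even after resizing or recompression. The sketch below is a toy version of one such fingerprint (an average hash) on a hypothetical 2×2 grayscale image; real services operate on much larger images and more robust hashes:

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count of differing bits; a small distance suggests similar images."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200], [30, 220]]       # stand-in for a downscaled photo
recompressed = [[12, 198], [28, 225]]   # same photo, slightly altered
unrelated = [[200, 10], [220, 30]]      # a different image

h = average_hash(original)
print(hamming(h, average_hash(recompressed)))  # 0 — near-duplicate
print(hamming(h, average_hash(unrelated)))     # 4 — clearly different
```

This is why uploading your photo to a reverse image search can surface copies that were cropped or re-saved: the fingerprint of a lightly edited copy stays close to the original's, while unrelated images land far away.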
You can also be mindful of who you are conversing with on the web. A basic check of their social media profiles, comments on their images and whether similar profiles exist could help you determine if the person is genuine.