TTP said 14 of the nudify apps came from China, posing an additional security risk.
Apple and Google’s app stores are hosting dozens of “nudify” apps that use artificial intelligence to produce deepfake nude images of real people, according to a report by watchdog group Tech Transparency Project. The revelation has sparked widespread concern online, with both tech giants facing questions over how such apps were allowed to flourish on their platforms despite stringent app store rules.
In its investigation, TTP found 55 such “nudify” apps on the Google Play Store and 47 on the Apple App Store. Both platforms also continue to offer access to xAI’s Grok, which is widely regarded as one of the most prominent tools used to create non-consensual deepfake images.
The Tech Transparency Project searched both app stores using terms like “nudify” and “undress,” and tested the apps with AI-generated images. The tests showed that many apps could digitally remove clothing or superimpose faces onto nude bodies.
What is particularly worrying is that these apps have collectively been downloaded more than 700 million times and generated over $117 million in revenue. Apple and Google receive a cut of this revenue.
The investigation further revealed that many of the apps named in the report are rated as suitable for teens and children. An app called DreamFace, for instance, is rated suitable for ages 13 and up on the Google Play Store and ages 9 and up on the Apple App Store.
Both companies have responded to the investigation. Apple said it has removed 24 apps from its store, according to a report by CNBC, though this falls short of the 47 apps identified by TTP researchers. A Google spokesperson said the company has suspended several apps referenced in the report for violating store policies, but declined to specify how many had been removed.
The report comes after Elon Musk's Grok was found to be generating sexualised images of both women and children. Over a period of 11 days, the AI chatbot reportedly generated around three million sexualised images, including roughly 22,000 images involving children. X's safety account said that "anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content."
“Nudification” apps and websites use generative artificial intelligence to create realistic deepfake nude images of real people by removing clothing or manipulating photos, often targeting women and children. These tools are trained on large image datasets and have serious mental health implications for women and girls, who make up the vast majority of sexually explicit deepfakes online.
While it is illegal to possess AI-generated sexual content involving children, the AI models used to create these images are not illegal. In December, the UK government said it plans to ban “nudification” apps by making it illegal to create or distribute AI tools that digitally remove clothing from images, as part of a broader effort to combat misogyny and reduce violence against women and girls.