Taylor Swift attends a premiere for Taylor Swift: The Eras Tour in Los Angeles, California, U.S., October 11, 2023. REUTERS/Mario Anzuoni/File Photo

Deeply offensive, sexually explicit images of popular musician Taylor Swift have been proliferating widely on X, the platform formerly known as Twitter, over the past two days. The images were reshared millions of times on the platform and have drawn condemnation from wide quarters, with everyone from the singer’s fans to the White House raising concerns.
One of the most shared posts containing the images got more than 45 million views, 24,000 reposts and hundreds of thousands of likes and bookmarks over a period of 17 hours before it was removed from the platform, according to The Verge. In a textbook instance of the Streisand effect, the images gained more attention and spread more widely as furious fans and other users raised concerns on X. “Taylor Swift AI” was trending on the platform for two days.
Thankfully, Swifties, the name given to the massive global crowd of the singer’s enthusiastic fans, drowned out the search term on the platform by making different posts with the same keywords. However, some explicit images still remain on the platform, according to various reports.
404 Media traced the AI deepfake Taylor Swift images to a specific Telegram group dedicated to abusive images of women. One of the tools that the group uses is a free Microsoft text-to-image generator.
Importantly, these images are not really “deepfakes” if you go by the original definition of the word. Deepfake originally referred to images or videos created with deep learning models trained on one person’s face so that it could be swapped onto another body in existing footage. In other words, instead of using AI to graft Taylor Swift’s face onto a real pornographic image, these images were made from scratch using generative AI.
The Telegram group in question recommended that its users use Microsoft’s AI image generator, Designer. Users in the group also shared prompts that helped people get around the protections Microsoft has put in place. Before the images went viral, members of the group told users to type “Taylor ‘singer’ swift” instead of “Taylor Swift” to circumvent restrictions. While 404 Media was not able to recreate the type of images that were posted to X, it did find that Microsoft’s Designer would generate images of “Taylor ‘singer’ Swift” even though it did not work with “Taylor Swift.”
The White House said on Friday that it was “alarmed” by the deepfakes and added that social media companies have an important role to play in enforcing their own rules and preventing misinformation.
“This is very alarming. And so, we’re going to do what we can to deal with this issue. So while social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing, enforcing their own rules to prevent the spread of misinformation, and non-consensual, intimate imagery of real people,” White House Press Secretary Karine Jean-Pierre said at a news briefing, reports Reuters. Jean-Pierre added that the US Congress should take legislative action.
SAG-AFTRA (Screen Actors Guild – American Federation of Television and Radio Artists), the American labour union that represents over 160,000 professionals in the media industry, also condemned the deepfake images in a statement.
“As a society, we have it in our power to control these technologies, but we must act now before it is too late. SAG-AFTRA continues to support legislation by Congressman Joe Morelle, the Preventing Deepfakes of Intimate Images Act, to make sure we stop exploitation of this nature from happening again. We support Taylor, and women everywhere who are the victims of this kind of theft of their privacy and right to autonomy,” said the statement from the union.
X, the Elon Musk-owned social media platform, also issued its own reaction without directly referring to either Taylor Swift or AI deepfakes.