Google’s new Nano Banana AI tool has been dominating Instagram trends lately. The tool – essentially a rebranded version of Google’s Gemini 2.5 Flash Image model – lets people turn ordinary selfies into hyper-realistic 3D figurines. One viral feature even creates Polaroid-style photos of your present self embracing your younger self.
But what started as harmless experimentation has begun to spark concern. An Instagram user recently shared an unsettling experience with Nano Banana AI, raising questions about how much personal data these tools might be drawing on.
She explained that she had been eager to try the viral saree-image trend, which runs on the same Gemini model. “I generated my image and I found something creepy… so a trend is going viral on Instagram where you upload your image on Gemini with a prompt and Gemini converts it into saree… I tried it last night and I found something very creepy on this,” she wrote.
Her biggest shock came when the AI-generated image included a mole on her body that she hadn’t mentioned in the prompt. “How Gemini knows I have mole in this part of my body? You can see this mole… this is very scary, very creepy… I am still not sure how this happened,” she said, urging followers to be careful about what they upload to AI platforms.
The video quickly spread across Instagram, and comments poured in. One user offered a simple workaround: “Best option is to dress up in a saree and click some nice candid pictures yourself.”
Another user responded in Hindi, writing “Normal hai yar” (It’s normal, man). They added that because Google owns Gemini and also has access to Google Photos, “better result ke liye use kiya hai isne apki purani pictures ka” (it has probably used your older pictures for a better result). “Google knows everything about you,” they concluded.
A third person said the experience shouldn’t be surprising: “That is exactly how AI works. AI draws information from your digital footprint, from all the images you’ve been uploading online… Our over-sharing of information is the real issue.”
But not everyone agreed with the woman’s claims. One commenter criticised her post, writing, “A girl with 0 knowledge about LLM AI models. Stop spreading misinformation about data and all. Gemini is the most concerned model in the world in terms of explicit content and data security… Half knowledge is too dangerous.” They argued that the mole was likely a coincidence and not evidence of hidden data collection.
They went on to suggest that if privacy is a major worry, “delete your account from every platform and stop using WhatsApp — use Telegram or Signal instead.” They also pointed out that Meta’s policy allows it to use publicly shared content from its platforms, including Instagram photos, captions, and posts, to train its generative AI models.