Actor and politician Hema Malini recently raised serious concerns about the misuse of AI and ‘deepfake’ technology, especially its use to target celebrities.
Speaking in the Lok Sabha during Zero Hour, she highlighted the growing dangers posed by these technologies, stressing that while AI has many benefits, it is also being used ruthlessly to create fake content that tarnishes reputations. “Many of us have become victims of this ruthless misuse which creates multiple fake videos, tarnishing the image of the person concerned. These go viral and cause tremendous impact on the victim’s mental health. This cannot be taken lightly,” she stated during her speech.
Hema’s concerns echo those of several other celebrities who have fallen prey to AI-generated deepfake videos circulating widely on social media. Recently, Vidya Balan issued a public warning, clarifying that AI-generated content appearing to feature her was fake, and urging the public to verify such videos before sharing them. Hema also said, “There are multiple videos currently circulating on social media and WhatsApp, which appear to feature me. However, I want to clarify that the videos are AI-generated and inauthentic. I have no involvement in their creation or dissemination, nor do I endorse their content in any way.”
She continued, “Any claims made in the videos should not be attributed to me, as they do not reflect my views or work. I urge everyone to verify information before sharing and be cautious of misleading AI-generated content.”
The growing prevalence of AI misuse in spreading misinformation has raised valid concerns among many. But should you be worried too?
Poras Pratap Singh, founder at Neurix.ai, says, “Being a victim of deepfake content can be deeply distressing, especially when the content spreads rapidly online. For public figures and everyday individuals alike, the loss of control over one’s own image can lead to anxiety, reputational damage, and emotional distress. Victims often feel violated, as deepfakes manipulate their identity in ways they never consented to.”
The psychological impact is heightened by the difficulty of disproving a deepfake once it has gained traction, Singh says. “In cases where deepfake videos are used for malicious purposes, such as spreading misinformation or creating explicit content, the emotional toll can be severe, sometimes leading to trust issues, social withdrawal, and mental health struggles.”
The viral nature of deepfakes amplifies this problem, as content can spread across multiple platforms within minutes, making it nearly impossible to contain. This highlights the urgent need for technological and legal safeguards to protect individuals from misuse.
For individuals, Singh suggests that maintaining strong digital hygiene is essential. “Avoid sharing high-resolution images and videos publicly unless necessary, as deepfake models rely on such data for training. Using privacy settings on social media to limit exposure and enabling identity verification features can also help.”
He adds, “Organisations, especially media houses and brands, should invest in deepfake detection tools powered by AI to authenticate content before it is published or amplified. Cybersecurity awareness training for employees and public figures is also crucial, ensuring they can recognise and report suspicious digital activity.”
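Singh does not name a specific tool, but for readers curious what such a pre-publication screening step might look like, here is a minimal sketch in Python using the Hugging Face transformers pipeline. The model identifier is a hypothetical placeholder, not a real checkpoint; any image classifier trained to flag AI-generated imagery could be slotted in, and the confidence threshold is an assumption a newsroom would tune for itself.

```python
# A minimal sketch of automated screening before content is published or amplified.
# Requires: pip install transformers pillow torch
from transformers import pipeline

# NOTE: "example-org/deepfake-detector" is a hypothetical placeholder model name.
# Substitute a real classifier trained to distinguish genuine from AI-generated images.
detector = pipeline("image-classification", model="example-org/deepfake-detector")

def screen_before_publishing(image_path: str, threshold: float = 0.8) -> bool:
    """Return True if the image looks safe to publish, False if it should be flagged."""
    results = detector(image_path)  # e.g. [{"label": "fake", "score": 0.93}, ...]
    for result in results:
        if result["label"].lower() == "fake" and result["score"] >= threshold:
            return False  # likely AI-generated: hold for human review
    return True

if __name__ == "__main__":
    if not screen_before_publishing("submitted_photo.jpg"):
        print("Flagged: possible AI-generated content. Escalate to an editor.")
```

In practice a flag like this would route the content to a human reviewer rather than reject it outright, since detection models produce false positives.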
Legal preparedness is another key aspect, notes the expert. “Victims should be aware of existing cyber laws and have rapid response strategies in place to report and take down deepfake content when necessary. Public figures and influencers may also consider watermarking their official video content to establish authenticity,” concludes Singh.
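Watermarking official footage can be as simple as stamping every frame at the point of publication. Below is a minimal sketch in Python using OpenCV; the file names and watermark text are illustrative assumptions, not part of any specific workflow Singh describes.

```python
# A minimal sketch of visibly watermarking official video content.
# Requires: pip install opencv-python
import cv2

def watermark_video(src_path: str, dst_path: str, text: str) -> None:
    """Stamp a visible text watermark onto every frame of a video."""
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    out = cv2.VideoWriter(dst_path, fourcc, fps, (width, height))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Overlay the watermark in the bottom-left corner of each frame.
        cv2.putText(frame, text, (20, height - 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2, cv2.LINE_AA)
        out.write(frame)

    cap.release()
    out.release()

# File names and handle are hypothetical examples.
watermark_video("official_statement.mp4", "official_statement_marked.mp4",
                "Official video - @example_handle")
```

A visible overlay like this is the simplest form of the idea; imperceptible watermarks are harder for bad actors to crop out, but the underlying principle of marking official content before release is the same.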