Opinion: On misuse of its AI tools, Big Tech can’t pass the buck

AI can be transformative. Soon, it may even become indispensable. But the rapid scaling of technology must not come at the cost of users’ safety and privacy

3 min read | First published: Jan 7, 2026, 08:11 AM IST

The new year began on a jarring note for many women on X (formerly Twitter) whose photographs were manipulated into sexually explicit images using the AI chatbot Grok. In the days since, the flood of objectionable content — including images of minors — has done little to ease concerns about the misuse of this technology. Alarmed by the proliferation of non-consensual images, authorities worldwide have urged X to take action. These include the Government of India, which warned the company over its “serious failure” to enforce safeguards and over violations of the IT Rules, 2021, and the Bharatiya Nagarik Suraksha Sanhita, 2023. X’s response to the widespread outrage has been far from adequate. It posted on Sunday that “anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content”, virtually shrugging off any responsibility for a tool that can so easily be turned into a weapon of harassment.

This is, however, a problem that goes beyond one company. The creation of non-consensual, sexually explicit imagery predates AI, certainly. But what once required sophisticated software and at least some technical skill is now accomplished with a mere prompt. In October 2025, this newspaper reported on the existence of several accounts on X and Instagram that routinely share deepfake videos of celebrities, particularly women. In the case of Grok, which is integrated with X and therefore able to access information and share content in real time, the problem is magnified because of the ease with which such images can spread. While companies like Meta and Google have some form of AI labelling on their platforms, enforcement has been patchy. Most measures, including the taking down of flagged content, are reactive, and far too dependent on reporting by individual users.


AI can be transformative. Soon, it may even become indispensable. But the rapid scaling of technology must not come at the cost of users’ safety and privacy. The “move fast and break things” attitude that has long characterised Silicon Valley is not compatible with the difficult task of building public trust. As it continues to seek “safe harbour protections”, Big Tech must ensure that stronger safeguards are built into a technology that is being rapidly integrated into users’ daily lives. Until then, its calls for legal immunity will ring hollow, and public confidence will remain elusive.
