
AI-generated content has proliferated rapidly in the last few years. Access to free or nominally priced AI models has made it possible for anyone, anywhere in the world, to create content (video clips, images and audio) and upload it to the internet. Indeed, AI fuels some of the fastest-growing YouTube channels. The surge, which will only grow in sophistication, presents a bewildering array of challenges. AI-generated deepfakes are used to deceive, spread misinformation, and facilitate financial fraud. Governments are now taking steps to address these concerns. These are moves in the right direction: viewers should be able to differentiate between the real and the artificial.
AI is now inextricably embedded in platforms, and the sheer volume of content on social media warrants greater urgency in addressing the problem of AI misuse. AI can be used to spread information, educate, and entertain. But it can also be a tool for misinformation and polarising pursuits. Deepfakes that clone voices can be misused to defraud people. A distinction, therefore, has to be drawn based on the nature of the content and the motive for using AI. AI models themselves need to be trained on data that are more inclusive and representative; the challenges will only grow as these models inevitably become more sophisticated. Policy has always played catch-up with technology, so regulation will have to be sharper, smarter and speedier. Rules on disclosure are a good first step, but on the long, winding road ahead, the next steps are crucial.