
Public conversation around technology governance now invariably proceeds from an axiom: regulation and policy are at least a step behind innovation. That has indeed been the case historically, from the printing press to the rise of social media. But it need not be so with AI.
The public launch of ChatGPT, OpenAI's Large Language Model (LLM)-based chatbot, in November 2022 disrupted the technology space. 2023 witnessed the democratisation of LLMs: never before had AI this sophisticated, with such a human-like interface and such vast knowledge, been so easily available. ChatGPT also brought into the open the race among the world's biggest companies, including Google, and between China and the US, to develop Artificial General Intelligence (AGI). The year marked a watershed moment in technology, and raised pressing questions about its regulation.
The panic around AI holds twin dangers. First, it could invite a regulatory response that stifles the potential of a tool that can aid innovation and make knowledge and skills more accessible. Second, it may obscure the issues that, incrementally, end up making AI more harmful than beneficial. Safeguards against plagiarism need to evolve alongside these models. Questions of authorship and copyright are already being litigated in some jurisdictions, as in The New York Times's lawsuit against OpenAI. The use of machine learning software for surveillance, facial recognition and predictive policing has major implications for privacy and human rights. Addressing these concerns requires more than the expertise of the engineer and the technologist. In 2024, society must drive technology, rather than the other way around.