Opinion | Satwik Mishra writes: Cautiously on AI
The challenges it presents demand creativity and responsibility

The G20 Delhi Declaration stresses the importance of responsible artificial intelligence (AI) practices, including the protection of human rights, transparency, fairness, and accountability. This month, the G7 nations agreed to draft an international AI code of conduct, focused on securing voluntary commitments from companies to prevent harm. Approximately 700 policy instruments are under discussion to regulate AI. Broad agreement exists on the regulatory principles, even as the mechanisms to realise them remain minimal. The root of many concerns lies in control, or the potential lack thereof.
As fire once illuminated dark caves, AI now lights up our digital age, redefining progress. According to Stanford’s AI Index Report 2023, private investment in AI has increased 18-fold since 2013, and company adoption has doubled since 2017. McKinsey projects that the annual value of AI could range from $17.1 trillion to $25.6 trillion. AI is on the ascent, with rising capabilities, affordable access, and widespread applications. Its potential is as captivating as the gravity of its risks.
AI presents well-documented challenges in biased models, privacy issues, and opaque decision-making, impacting diverse sectors. Generative AI’s rise risks tarnishing public discourse integrity with misinformation, disinformation, influence operations and personalised persuasion strategies, potentially eroding social trust. As AI begins to weave into the defence frameworks of nation-states, there is a risk that its inexplicable hallucinations and unchecked analyses might trigger unanticipated and unmanageable military escalations.
Within the web of challenges, the possibility of Artificial General Intelligence has been cited as the towering danger. Concerns around rogue yet powerful AI systems, or those hijacked by malicious actors, have risen. The chilling potential for AI to autonomously chart its course, duplicate its capabilities and evolve unchecked has been articulated as a very real possibility in the years ahead.
In 2023, in response to these challenges, global institutions undertook pivotal interventions, among them the draft EU AI Act and the US’s voluntary safeguards framework, announced in concurrence with seven AI firms.
While acknowledging risks, it would be inadvisable to impede the advancement of AI’s competence or “intelligence”. Our challenges are intricate, and AI offers substantial promise for their solutions. Our ability to address these issues without such technological advancements is limited. Just as Enrico Fermi’s team emphasised the importance of control rods in the development of the first nuclear reactor, our approach to AI should be centred on ensuring it remains under our control.
First, we must establish worldwide consensus regarding the risks of AI. Even a single vulnerability can create avenues for malicious actors to execute far-reaching breaches. It would be prudent to set up an international commission focused on iteratively working towards identifying risks associated with AI.
Second, it is critical to conceptualise standards that any public AI service must meet. Standards accelerate safety by minimising risks, advance quality, pave the path for public-private partnerships, promote efficiency by eliminating redundancies, and above all, when adopted at an international level, promote interoperability across regions. For AI, we need socio-technical standards, which describe ideals and, equally importantly, the technical mechanisms to achieve them. AI will keep iterating as a technology, so its standards must adapt with it.
Finally, states would need a substantial stake in AI’s design, development, and deployment, currently dominated by a few companies. We must reimagine public-private partnership models and build regulatory sandbox zones wherein experiments for propelling the competitive advantage of entrepreneurs are matched by equitable solutions to social challenges. The recently announced partnership between the UAE Emirate of Ras Al Khaimah and Humans.AI to establish a zone where companies “securely govern and run their AI models with transparency” is one such pilot.
AI’s journey is filled with opportunities and challenges, demanding creativity, humility, and responsibility. While its potential is undeniable, its future must be tempered with caution, foresight, and above all, control.
The writer is Vice President (Content) at Centre for Trustworthy Technology, a World Economic Forum Fourth Industrial Revolution Centre. Views expressed are personal