Opinion: The real AI debate isn’t about technology. It’s about the choices we make

The Human Development Report 2025 rejects fatalism and argues that the AI future depends on what societies choose to value, not on which jobs the technology might replace

December 1, 2025, 12:09 PM IST

Written by Sushant Kumar

In this moment of tangled hope and anxiety about AI, the 2025 Human Development Report (HDR) — A Matter of Choice: People and Possibilities in the Age of Artificial Intelligence — makes a promising claim: The trajectory of AI is not a technological inevitability, but a human choice. The report’s core message, that AI’s impact will be defined by human decisions about its design, development and deployment, offers balance amid the overhyped narratives around the technology.


Perhaps one of the most important arguments the HDR makes is how AI acts like a mirror — “reflecting and amplifying the values, structures, and inequalities of the societies that shape it. AI does not act independently of us; it evolves through our decisions and our priorities.” The HDR dispels the myth of technological inevitability. It also sheds light on the fact that if existing injustices and divides in society are not addressed, AI will entrench them further. In this way, the HDR 2025 shifts the conversation on AI from the technology itself to people and how they reinvent themselves in the face of profound changes brought about by AI.

Augmentation, not replacement

The critical discourse on AI often imagines automation sweeping away human labour, efficiency trumping dignity, productivity superseding care. The HDR flips this: What if the question is not which jobs will be lost, but which qualities of humanity we will keep? The HDR reminds us that AI’s real power lies in augmenting human capabilities, not replacing them. AI can beat any human chess player, but we still prefer playing chess with other humans. AI has accelerated music streaming, yet our hunger for live gatherings has increased. Machines can amplify human potential, not erase it. Two-thirds of US firms report using AI for product innovation rather than to replace workers; only about a quarter use it for task replacement.

The work of care, control and epistemic agency

The report points to three intertwined factors: Agency in work, control over one’s role, and epistemic agency, that is, the ability to understand and influence how systems work. In a world where firms deploy AI to monitor timing, productivity and output, humans risk becoming fungible assets. But the HDR calls for building a complementary economy — one that recognises human workers at every stage: coder, user, label-trainer, maintainer, ethicist, relational steward. In low- and medium-HDI countries, 70 per cent of respondents expect AI to increase their productivity; two-thirds expect to use it in education, health or work within a year. That presents an opportunity if we shape AI to be a co-steward of humanity, honouring human relations and amplifying our potential.


Human connection and development

Human development isn’t measured merely in output, but in our capacity to live lives we value. The HDR emphasises that AI must serve flourishing. Human connection, reflected in caring for loved ones, community interdependence and relational work, remains vital. But AI risks undermining it if designed without heed to relational dynamics. In India, where vast care-economies exist, AI deployment must not treat care as a cost to be optimised but as a fabric to be honoured, in a way that doesn’t perpetuate inequities. Otherwise, we risk substituting relational depth with algorithmic mimicry.

Hidden labour, invisible inequalities

Who is doing the invisible work behind AI training? The HDR flags the concentration of compute, investment and data centres in the US, China and the UK. Half of the world’s data centres are in the United States. In contrast, many workers in the Global South perform data-labelling and micro-tasks under precarious conditions, yet their labour remains invisible. For India, this raises questions: When we participate in the AI economy, what roles are we assuming? Are we simply data and labour providers, or meaningful co-designers? If we accept the former, we reinforce global inequalities.

Youth, screen time, and relational casualty

While the HDR adopts a hopeful tone, it raises concerns about emergent human-machine relational ruptures — the impact of AI-driven content on teenage cognition, addiction, sleep deprivation, social isolation and body-image anxieties. These are real-world perturbations of relational life in contexts like India, with its booming youth population. According to a study cited in the report, 46 per cent of Instagram users and 60 per cent of TikTok users say they would pay for everyone’s account, including their own, to be deactivated for a month. We need to ask: What do we lose when our connections are shaped by algorithmic feeds instead of shared physical presence? How do our natural rhythms of growth and rest change when machines tune them for constant engagement rather than restoration?

Choices, not inevitabilities

If we pull back, the key message of the HDR is: It’s up to human choice. What we want machines to do, and what we want humans to keep doing, are human decisions. When we frame AI as inevitable, we surrender agency. India’s discourse on AI must reflect that agency. Instead of strategising for “AI adoption” at scale, we need to ask: What forms of human relational work do we want to amplify with AI? What roles do we refuse to automate? How do we ensure human epistemic agency in systems that increasingly influence our lives?

Lessons for Indian AI policy

The 2025 HDR invites a shift: AI is not a dragon to be slain or a saviour to be worshipped. It is a mirror of our relational choices. In India, with its rich traditions of interdependence, social care, and relational labour, we can chart a different path, one of augmentation, ethics, and flourishing.

The HDR recommends building a complementary economy to create opportunities for people and AI to collaborate rather than compete. One way to do that is to invest in labour-enhancing AI that augments and complements human work. A second is to invest in Small Language Models (SLMs), which have far fewer parameters than Large Language Models (LLMs). Their lower carbon footprint makes them better suited to India’s climate goals. SLMs also enable digital sovereignty and can be fine-tuned on domain-specific datasets in Indian languages. They reduce algorithmic bias, are easier to audit and interpret, and work well in low-bandwidth environments such as rural and remote parts of the country.
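To give a concrete sense of what fine-tuning a small language model on a domain-specific Indian-language corpus can look like, here is a minimal, illustrative sketch using the Hugging Face transformers and datasets libraries. It is not drawn from the HDR; the model name and corpus file are placeholders, and in practice any compact open-weights model and a curated local-language dataset would stand in for them.

```python
# Illustrative sketch only: fine-tune a small open-weights causal language model
# on a domain-specific Indian-language text corpus.
# MODEL_NAME and CORPUS below are hypothetical placeholders.

from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "some-small-open-model"        # placeholder: any compact open model
CORPUS = "hindi_health_advisories.jsonl"    # placeholder: JSONL file with a "text" field

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Many causal LMs ship without a padding token; reuse the end-of-sequence token.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Load the local corpus and tokenize each example.
dataset = load_dataset("json", data_files=CORPUS, split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# Collator for causal language modelling (labels are built from the inputs).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="slm-finetuned",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=2e-5,
    ),
    train_dataset=tokenized,
    data_collator=collator,
)

trainer.train()
trainer.save_model("slm-finetuned")
```

The point of the sketch is that the heavy lifting sits in the data, not the code: a few hundred lines of curated domain text in an Indian language, plus modest compute, is enough to adapt a small model for a local use case.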

Third, the HDR urges us to address the digital divide. Multidimensionally poor people with lower levels of education are far less likely to have access to the internet. Digital literacy and responsible AI use should be integrated into school curricula. Investment should prioritise enhanced capabilities such as critical, creative and relational thinking, which are essential for evaluating AI outputs and navigating a high-choice information environment.

We can invite experiments that refuse the binary of human vs machine and instead explore their entanglement: Multi-stakeholder labs in Indian universities, hackathons that design AI for relational economies, public art that asks citizens to choreograph their algorithmic feeds. We must remember that the future isn’t arriving for us. The future arrives with us. And in that arrival, our choices matter.

The writer, formerly a fellow of the Harvard Kennedy School, is assistant professor, Jindal School of Government and Public Policy
