
Opinion: How AI in criminal justice could spark a human rights crisis for the marginalised

The databases currently used in AI development do not reflect the complexities of Indian society. While AI has many exciting possibilities, it also raises serious ethical and social questions that must be addressed before it is adopted widely

June 16, 2025, 04:27 PM IST

Written by Roshan Pandey and Chetna Trivedi

We are living in the era commonly referred to as Industry 4.0, where artificial intelligence and information technology are becoming part of almost every aspect of life. From education and healthcare to agriculture and criminal justice, governments around the world, including India, are using AI-based technologies with the claim of improving governance, efficiency and transparency. The purpose of technology, ideally, is to improve human life. But an important question remains: Who controls this technology?


History reminds us that technological advancement does not automatically translate into better socio-economic conditions. For instance, when the steam engine was invented in the 18th century, it sparked a massive increase in industrial productivity. Logically, this should have reduced working hours and given people more leisure time. On the contrary, working conditions for labourers worsened. This contradiction arose because the benefits of the technology were controlled by the corporate class, which used it to extract labour without improving workers' well-being. The same pattern could repeat itself with AI if we fail to examine "who" shapes and benefits from its development.

AI and society: Shaped by bias

Sociologists Donald MacKenzie and Judy Wajcman introduced the concept of the "social shaping of technology" (SST) in the 1980s. SST argues that technology is not neutral: it is influenced by the social, political, and economic environments in which it is developed. AI in its current form reflects existing social biases of caste, gender, religion, skin colour, language, and region. A 2019 Guardian report found women are 47 per cent more likely to be seriously injured and 17 per cent more likely to die in car crashes, as cars and their safety systems are designed around male bodies. A 2018 MIT study found facial recognition systems had an error rate of 0.8 per cent for light-skinned men but 34.7 per cent for dark-skinned women.

UN Women warns that AI can amplify stereotypes in areas such as hiring and healthcare. The evidence points one way: AI is not neutral, and biased data harms marginalised groups. In criminal justice, this could trigger a serious human rights crisis.

Who does India's data represent?


In India, there is no specific legal framework regulating the use of AI, yet AI tools are being widely adopted by both government and private institutions. Large language models and generative AI systems are trained on existing datasets, but those datasets are themselves shaped by unequal access. The CSDS 2019 report pointed to a deep digital divide in India: a significant part of the population, especially women, Dalits, Adivasis, minorities, and rural communities, has limited or no access to the Internet.

Oxfam’s India Inequality Report 2022 found that women use the Internet 33 per cent less than men. Only 31 per cent of rural Indians are online, compared to 67 per cent in urban areas. The situation is even more skewed when it comes to caste, with “upper” caste people having far more representation in digital spaces than Dalits and Adivasis. This means that the data on which AI systems in India are trained does not fairly represent the country’s social reality. As a result, AI tools built on such biased datasets risk reinforcing existing inequalities.

AI in criminal justice: A threat to the marginalised?

The NCRB's Prison Statistics 2018 reveals deep inequalities in India's justice system: two-thirds of prisoners are Dalits, Adivasis, or OBCs; 19 per cent are Muslim; and 66 per cent are illiterate or have not studied beyond Class X. NCRB data from 2015 shows that over 55 per cent of undertrials come from Dalit, Adivasi, or Muslim backgrounds. This raises concerns about using AI in criminal justice, where biased data can reinforce existing discrimination. Recent examples highlight these risks. The Punjab and Haryana High Court consulted ChatGPT while rejecting bail in a murder case, setting a troubling precedent. Amazon's AI hiring tool, built in 2014 and later scrapped for bias against women, shows how AI can replicate past injustices. In India, where power in government, corporates, and the media is concentrated among the privileged, deploying AI in recruitment and policing without ethical safeguards could further marginalise vulnerable communities.

Experts often argue that developed countries possess more advanced AI systems to control crime, whereas developing nations lack the material and human resources needed to use such technologies effectively. However, even in the United States, AI-driven systems have demonstrated significant flaws. For instance, the COMPAS algorithm used in the criminal justice system was found to be nearly twice as likely to wrongly flag Black defendants as high risk compared to White defendants charged with similar offences. Similarly, the NAACP has raised concerns about the Chicago crime prediction algorithm and its systemic bias against Black communities. These examples show how AI-based technologies can disproportionately harm marginalised groups and violate their human rights.

The central government in India is increasingly deploying tech-based tools in the criminal justice system. Notable initiatives include the Crime and Criminal Tracking Network and Systems (CCTNS), a nationwide database of crimes and offenders; the Inter-operable Criminal Justice System (ICJS), which enables information sharing among various state institutions; and the Automated Facial Recognition System (AFRS), designed to assist in identifying suspects and missing persons. However, the use of such systems in the absence of a well-defined legal and regulatory framework raises concerns. These measures may reinforce systemic bias and further marginalise communities that already lack adequate representation and voice within institutional records.

The need for inclusive and ethical technology

India holds enormous diversity but also deep inequalities. The databases currently used in AI development do not reflect the complexities of Indian society. While AI has many exciting possibilities, it also raises serious ethical and social questions that must be addressed before it is adopted widely. The government must therefore enact regulations to ensure that AI remains inclusive, transparent, and accountable. Technology must not reinforce existing stigmas and exclusion; rather, it should become a tool for empowerment. AI's development should be centred on the needs of marginalised communities to ensure equitable progress.

Pandey is a PhD scholar at BHU and Trivedi is an assistant professor at Amity University
