Opinion
Law, lies, and AI: When technology goes too far in the courtroom, everybody pays for it

A petition filled with fabricated citations reveals the dangers of using AI without due diligence

October 1, 2025, 03:00 PM IST

Written by Vishavjeet Chaudhary 

Recently, a petition filed before the Delhi High Court was dismissed as withdrawn. The reason was rather unusual: the petition had cited non-existent case laws, invoked fancifully made-up propositions of law as precedent, and presented blatantly wrong positions of law. This was flagged by the respondent's lawyers, and the errors were painfully obvious. For instance, paragraph 73 of a judgment was cited as precedent when the judgment has only 27 paragraphs. A case that was cited did not exist at all. Some paragraphs attributed to previous cases were found to be incorrect. It was noted that "appropriate steps" would be taken in this regard.


The incident, however, raises pertinent questions that we must tackle head-on, and it highlights a much bigger, global issue: the use of artificial intelligence. Some obvious points aside (it remains the duty of counsel, for instance, to ensure that drafting is impeccable and free of mistakes, let alone falsehoods), the issue is a much larger one. Whilst minor typographical errors may inadvertently make their way into court documents, flagrant and intentionally misleading propositions have no place in the system.

It must be emphasised that, perhaps for the first time in a generation, a disruption as immediate and as global as AI has arrived. The pandemic certainly expedited its innovation, usage and adoption across various roles. The potential is also huge. AI can be used to solve problems where previous systems failed, and scalability is chief among its strengths. Rights, duties and obligations can be communicated to the masses. Translation, record keeping, note taking and information dispensing can be done effectively and efficiently. Pro bono legal assistance can be provided at a larger scale and in a dispersed and diversified manner. Yet, challenges remain.

A plethora of reports suggests that the legal profession will be among the sectors most disrupted by the advent of AI tools and systems. The impact is already clear. There have been significant job losses so far as drafting and other such tasks are concerned. Lawyers are increasingly being told which propositions of law are to be used, on the strength of answers retrieved from AI tools. Law relies on rules, exceptions to those rules, exceptions to those exceptions, and then other related rules. The study of law, its application to the facts of each case, and legal research form a targeted and complicated matrix that demands rigorous intellectual investment. Whilst AI tools may be excellent at retrieving some data, their analysis leaves much to be desired.


Additionally, these tools are only as good as the data they draw upon, so there is always the risk that false, incomplete or incorrect data will have a domino effect on the ultimate answers. Any reliance on AI must therefore be accompanied by a structured understanding and a nuanced sieving of the information. Even a single piece of information could tip the balance, and inherent algorithmic bias means that such pieces will sometimes be missed by AI tools.

AI has certainly done great service on several fronts. Mundane tasks, data collection and comparison, and information retrieval can all be delegated, with appropriate oversight of the process. AI can also identify patterns fairly reliably. However, that is where the problem comes in: AI reinforces what it already knows, which carries the risk, indeed the inevitability, of reimposing and re-emphasising existing biases. Law in our systems follows precedent. Whilst predictability is the hallmark of this doctrine, it also presupposes (and demands) judicial imagination and invention. Context, political shifts and changes in belief all become part of judicial work and decision making. AI is inherently, and unfortunately woefully, unprepared for that. Patterns already established, decisions already made and arguments already scrutinised will most likely lead only to decisions that have already been made. Law certainly cannot afford to be this stagnant, stuck in its own relics.

Law is an inherently creative practice. It relies on bare information as much as it does on human emotions, passions and imagination. Cross-examination, for instance, allows body language to be observed. Drafting requires the ability to be concise, persuasive and accurate. Criminal trials require facts to be proven beyond reasonable doubt; unfortunately, the use of AI leaves more than reasonable doubt. In the legal field, there is no scope for such doubt: many times the questions will be of rights and obligations, of duties and powers, and sometimes of life and death.

To ignore AI, or not to harness its potential, is a practically impossible proposition and certainly not a wise one. Nonetheless, it is unequivocally important that we proceed with the utmost cautious optimism. Whilst doing so, we must reassess the structures that exist and the paths already covered. We must use AI, but with caveats about its reliability. We must dissect and examine bias, taking on the double task of correcting both simultaneously. It is a time of great reflection, of embracing the new, but with studied scepticism and acquired skill. It is time to turn the gyroscope and see.

The writer is a Delhi-based advocate
