AI bias in legal system: The Punjab and Haryana High Court recently warned against the disruption of judicial proceedings with the “repeated use” of cell phones by lawyers to “update themselves” through artificial intelligence (AI), among other things.
“President/Secretary of the Bar Association may apprise the worthy members not to compel the Court to pass any harsh order on account of repeated use of mobile phones during the course of hearing to update themselves through artificial intelligence/online platforms/Google information,” the judge said.
AI has been mentioned in judicial orders or discussed among judges for some time.
Courts and judges have repeatedly advocated for the integration of technology in the judiciary — albeit with considerable caution.
Most recently, CJI BR Gavai, a judge of the top court at the relevant time, flagged concerns over the integration of AI in the judiciary and said it should serve as an aid rather than a replacement for human judgment.
Referring to the issue of incorrect citations during judicial proceedings, he said, “This has led to situations where lawyers and researchers…have unknowingly cited non-existent cases or misleading legal precedents…”
“You should never, ever rely on AI as a primary source,” cautions technology lawyer Nikhil Narendran, referring to wrong citations popping up when searched on AI platforms.
He adds, “One of the first things that you’re taught as a law student is that you should rely on primary sources and not on secondary sources. Many of the current LLMs (large language models) not only invent case laws, but also suggest case laws which exist but do not have the ratio that one is looking for.”
The reason, he says, is insufficient training when it comes to Indian case law and its unique taxonomy and ontology.
“It’s an ongoing problem and some of us are working on it,” Narendran adds.
In another instance, former CJI DY Chandrachud called for adopting AI after capacity building and training to ensure its “ethical and effective utilisation”.
While he outlined the opportunities, the former CJI underscored the “complex challenges” the scenario entails.
“While AI presents unprecedented opportunities, it also raises complex challenges, particularly concerning ethics, accountability, and bias. Addressing these challenges requires a concerted effort from stakeholders worldwide, transcending geographical and institutional boundaries,” he said.
Similarly, Justice Surya Kant, in line to be the next CJI, recently stressed using technology “thoughtfully and inclusively”.
The judge said, “artificial intelligence algorithms must be designed to flag, not exacerbate, the biases so entrenched in society”.
In March 2023, Justice Anoop Chitkara of the Punjab and Haryana High Court used ChatGPT while denying the bail plea of a certain Jaswinder Singh, accused of assaulting an individual and causing his death. Justice Chitkara found that there was an element of “cruelty” to the assault, a ground which can be used to deny bail.
The judge then posed a question to ChatGPT: “What is the jurisprudence on bail when the assailants are assaulted with cruelty?”
The order contained the AI chatbot’s response which included that “the judge may be less inclined to grant bail or may set the bail amount very high to ensure that the defendant appears in court and does not pose a risk to public safety.”
Justice Chitkara clarified that such a measure was “only intended to present a broader picture on bail jurisprudence, where cruelty is a factor.”
A couple of years ago, Delhi High Court’s Justice Prathiba M Singh, in a trademark violation case, said the accuracy and reliability of AI-generated data were “still in the grey area and at best, such a tool can be utilised for a preliminary understanding or for preliminary research”.
She was hearing a lawsuit by luxury brand Christian Louboutin against a partnership firm involved in the manufacture and sale of shoes allegedly in violation of its trademark. The plaintiff’s lawyer had placed on record ChatGPT’s responses with respect to the brand’s “reputation”.
Narendran explains, “the judiciary has been cautiously optimistic when it comes to Gen AI. This is the correct approach as, if the bar is set high for adoption of new technology in the judiciary, it may lead to perverse outcomes since technology awareness is not uniform across the bar and the bench”.
He, however, says: “Let’s use the technology for augmentation of human capacity, where the professional is at the centre of service delivery to the clients and advocacy in front of the judge, as opposed to the technology being at the centre of it. This model allows for gradual and incremental gains leading to long-term efficiency gains.”