The Kerala High Court last week issued a policy document titled ‘Policy Regarding the Use of AI Tools in District Judiciary’ for “responsible use” of artificial intelligence (AI) in judicial work. The policy aims to reduce dependence on AI in the judiciary by limiting its use to administrative tasks.
This is the first time that a High Court in India has tried to frame principles and guidelines for using AI in the judiciary.
What does the policy cover?
The document focuses on four key principles: transparency, fairness, accountability, and the protection of confidential data.
The guidelines apply to all members of the district judiciary, including judges, clerks, interns, court staff, and other employees involved in judicial work. They apply regardless of whether AI tools — software that uses AI algorithms to perform tasks such as problem-solving — are used on personal or government devices.
The document provides a separate definition for Generative AI tools such as ChatGPT and DeepSeek, saying they produce human-like responses to prompts entered by the user.
The policy also differentiates between “general” AI tools and “approved” AI tools. Only an AI tool approved by the Kerala High Court or the Supreme Court can be used for court-related work.
The guidelines set clear limits on the use of AI tools. Using AI to write or draft legal judgments, orders, or findings is strictly prohibited.
Translating documents using AI tools without verification by a judge or a qualified translator is also not allowed.
The output of AI used for research work, such as looking up citations or judgments, must be verified by an appointed person.
The document encourages the use of AI tools for administrative tasks like “scheduling of cases or court management”. However, such use must be carried out under human supervision and should be duly recorded.
Errors in the tools, if any, must be reported to the Principal District Court or the Principal District Judge and forwarded to the IT department of the High Court. Judicial officers and staff are required to attend training sessions covering the ethical and technical issues involved in using AI tools for court-related work.
The document specifies that violation of any rule will automatically lead to disciplinary action.
Why is the policy relevant?
In February 2025, the Centre, in a press note, encouraged the use of AI in judicial work to help alleviate the backlog of cases and improve the speed of justice administration. Since then, several discussions have taken place regarding the risks and safeguards that such a move would require.
On July 17, the Karnataka High Court, while hearing X Corp’s petition challenging the Centre’s orders to block content under Section 79 of the IT Act through the Sahyog portal, discussed the use of AI algorithms in moderating content on online platforms.
Solicitor General of India Tushar Mehta noted that “there are instances where the lawyers start using AI for the purpose of research and artificial intelligence, as an inbuilt difficulty, it hallucinates.” AI hallucination is a blanket term for instances in which chatbots generate false or fabricated information in response to a prompt.
Justice M Nagaprasanna said, “Too much dependence will destroy the profession…I keep saying dependency on Artificial Intelligence should not make your intelligence artificial.”
In 2023, the Punjab and Haryana High Court took the assistance of ChatGPT to understand the global view on bail for an accused with a history of violence, including an attempt to murder.
Justice Anoop Chitkara, who denied bail, sought AI insights on global bail jurisprudence. He put the question to ChatGPT: “What is the jurisprudence on bail when the assailants are assaulted with cruelty?”
However, the court said, “Any reference to ChatGPT and any observation made hereinabove is neither an expression of opinion on the merits of the case nor shall the trial Court advert to these comments. This reference is only intended to present a broader picture on bail jurisprudence, where cruelty is a factor.”