The Supreme Court has termed the reliance of a trial court’s order on non-existent, artificial intelligence (AI)-generated judgments as “misconduct” and signaled its intent to “examine … in more detail” its “consequences and accountability”.
The observation was made by a Bench of Justices P S Narasimha and Alok Aradhe, while hearing a petition challenging an order of the Andhra Pradesh High Court. The apex court noted that the issue raises “considerable institutional concern”, “ … about the process of adjudication and determination”.
While hearing a property dispute, an additional junior civil judge in Vijayawada had appointed an Advocate Commissioner to survey the land concerned and determine if it fell within the boundaries of a specific sale deed. The defendants raised objections to the commissioner’s report, challenging its findings.
In August 2025, the judge dismissed these objections, citing four Supreme Court judgments.
However, none of these judgments were found to exist, as emerged when the order was challenged before the Andhra Pradesh High Court in January 2026.
In a report to the High Court, the additional junior civil judge admitted she had used an AI tool to research case law, saying it was the first time she had done so. The judicial officer said that she believed the answers the tool threw up to be genuine, and admitted she had not verified the citations against authentic legal databases. However, she said, she had no intention to misquote.
While disposing of the challenge to the civil judge’s order, Justice Ravi Nath Tilhari of the High Court accepted her explanation that the mistake had occurred in “good faith”. The High Court held that while the citations were fake, the “principles of law” applied in the order were correct, and hence refused to set aside the lower court’s order on the ground of erroneous citations alone.
However, the Supreme Court, hearing an appeal against the High Court’s order, took a far sterner view when the case came before it on February 27. Staying the proceedings in the trial court, the Bench declared that a decision based on fake judgments “is not an error in decision making” but “misconduct”, warning that “(a) legal consequence shall follow”.
Not isolated incident
Earlier too, both litigants and authorities have been found relying on what the Supreme Court referred to last week as “AI generated non-existing, fake or synthetic alleged judgments”.
On February 13, 2026, the apex court dismissed a special leave petition after finding that the petitioner had cited non-existent judgments. When questioned by the Supreme Court Bench, the counsel admitted to drafting the petition based on articles found online, without verifying the original judgments.
In January 2026, a Bench of the Bombay High Court imposed a cost of Rs 50,000 on a litigant for citing a fake case in written submissions. Justice M M Sathaye noted that the submission contained “give-away features” of AI generation, such as “green-box tick-marks, bulletpoint-marks, repetitive submissions etc”.
The court remarked that the “dumping” of unverified material must be “nipped at bud”, as it “resulted in waste of precious judicial time”.
In October 2025, another Bench of the Bombay High Court quashed an assessment order passed by the Income Tax Department, holding that it had added over Rs 22 crore to a company’s income based on three “completely non-existent” judicial decisions.
The High Court’s Justices B P Colabawalla and Amit S Jamsandekar observed that while AI is useful, “results thrown open by the system… are not to be blindly relied upon” and “should be duly cross verified”.
In September 2025, a petitioner in the Delhi High Court withdrew their plea after the opposing counsel pointed out that “some of the judicial precedents cited on behalf of petitioner do not even exist, and, in some of the precedents, the quoted portions do not exist”.
Institutional response
In November 2025, the Supreme Court released a ‘White Paper on Artificial Intelligence and Judiciary’, which identified “Fabrication of Cases and Hallucination” as a primary risk associated with the use of AI. It referred to multiple court orders that were found to be based on “fictitious judicial precedents”.
The document noted that AI tools can “hallucinate judgments, citations, quotes, or refer to any legislation that may not be in existence”.
To mitigate this, the White Paper recommended the establishment of AI ethics committees within courts and mandated that “all information obtained through AI tools shall be independently verified”.
In July 2025, the Kerala High Court became the first High Court to issue a formal AI policy for its district judiciary, saying that while the tools could be used for administrative tasks or translation, any output – especially legal citations – must be “meticulously verified”.
It explicitly warned that violations may result in disciplinary action.
Why AI can generate fake case law
Generative AI tools, such as ChatGPT, are not search engines that look through a verified database for facts. Instead, they are predictive engines designed to mimic human language – based on patterns learned from vast amounts of data. When asked for case law, these tools predict what a legal citation would look like – assembling party names, volume numbers and legal journals in a citation format that appears authentic.
Because the AI prioritises linguistic fluency over factual accuracy, it simply determines that such a combination of words is statistically probable in a legal context.
In the Vijayawada case, one of the AI-generated judgments cited was ‘Subramani v. M. Natarajan (2013) 14 SCC 95’, a citation that looks authentic but refers to no actual decision.
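The verification step the White Paper mandates can be illustrated with a minimal sketch. The snippet below extracts SCC-style citations from a passage and checks them against a small lookup set. The set and its entries are purely hypothetical placeholders; in practice, verification would mean consulting an authoritative database such as an official law reporter, not a hard-coded list.

```python
import re

# Hypothetical stand-in for an authoritative citation database.
# These entries are illustrative placeholders only.
VERIFIED_CITATIONS = {
    "(2012) 9 SCC 1",
    "(2014) 2 SCC 62",
}

# Matches citations of the form "(YYYY) <volume> SCC <page>".
CITATION_PATTERN = re.compile(r"\(\d{4}\)\s+\d+\s+SCC\s+\d+")

def check_citations(text):
    """Return each SCC-style citation found in text,
    paired with whether it appears in the verified set."""
    return [
        (cite, cite in VERIFIED_CITATIONS)
        for cite in CITATION_PATTERN.findall(text)
    ]

order_text = (
    "Reliance is placed on Subramani v. M. Natarajan "
    "(2013) 14 SCC 95."
)
for cite, verified in check_citations(order_text):
    status = "verified" if verified else "NOT FOUND - verify manually"
    print(cite, "->", status)
```

Run on the fake citation from the Vijayawada order, the check flags it as not found, which is exactly the kind of cross-verification against authentic legal databases that the judicial officer admitted she had skipped.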