Vigilante lawyers expose the rising tide of AI slop in court filings

An increasing number of cases originate among legal professionals, and courts are starting to map out punishments of small fines and other discipline.

More lawyers are using artificial intelligence to write legal briefs. Some vigilantes are publicizing the AI-generated errors. (Image: The New York Times)

Earlier this year, a lawyer filed a motion in a Texas bankruptcy court that cited a 1985 case called Brasher v. Stewart.

Only the case doesn’t exist. Artificial intelligence had concocted that citation, along with 31 others. A judge blasted the lawyer in an opinion, referring him to the state bar’s disciplinary committee and mandating six hours of AI training.

That filing was spotted by Robert Freund, a Los Angeles-based lawyer, who fed it to an online database that tracks legal AI misuse globally.

Freund is part of a growing network of lawyers who track down AI abuses committed by their peers, collecting the most egregious examples and posting them online. The group hopes that by cataloging the AI slop, it can draw attention to the problem and help put an end to it.

While judges and bar associations generally agree that it’s fine for lawyers to use chatbots for research, they must still ensure their filings are accurate.

But as the technology has taken off, so has misuse. Chatbots frequently make things up, and judges are finding more and more fake case law citations, which are then rounded up by the legal vigilantes.

“These cases are damaging the reputation of the bar,” said Stephen Gillers, an ethics professor at New York University School of Law. “Lawyers everywhere should be ashamed of what members of their profession are doing.”

Since the introduction of ChatGPT in 2022, professionals in fields from medicine to engineering to marketing have wrestled with how and when to use chatbots. Many companies are experimenting with the technology, which can come tailored for workplace use.

For lawyers, a federal judge in New York helped set the standard when he wrote in 2023 that “there is nothing inherently improper” about using AI, so long as they check its work. The American Bar Association agreed, adding that lawyers “have a duty of competence.”

Still, according to court filings and interviews with lawyers and scholars, the legal profession in recent months has increasingly become a hotbed for AI blunders. Some of those stem from people’s use of chatbots in lieu of hiring a lawyer. Chatbots, for all their pitfalls, can help those representing themselves “speak in a language that judges will understand,” said Jesse Schaefer, a North Carolina-based lawyer who contributes cases to the same database as Freund.

But an increasing number of cases originate among legal professionals, and courts are starting to map out punishments of small fines and other discipline.

The problem, though, keeps getting worse.

That’s why Damien Charlotin, a lawyer and researcher in France, started an online database in April to track it.

Initially he found three or four examples a month. Now he often receives that many in a day.

Many lawyers, including Freund and Schaefer, have helped him document 509 cases so far. They use legal research tools like LexisNexis to get alerts on keywords like “artificial intelligence,” “fabricated cases” and “nonexistent cases.”

Some of the filings include fake quotes from real cases, or cite real cases that are irrelevant to their arguments. The legal vigilantes uncover them by finding judges’ opinions scolding lawyers.

Peter Henderson, a Princeton computer science professor who started his own AI legal misuse database, said his lab was working on ways to find fake citations directly rather than relying on hit-or-miss keyword searches.

The lawyers say they don’t intend to shame or harass their peers. Charlotin said he avoided prominently displaying the offenders’ names for that reason.

But Freund said a benefit of a public catalog was that anyone could see whom they “might want to avoid.”

And in most cases, Charlotin added, “the attorneys are not very good.”

Eugene Volokh, a law professor at UCLA, blogs about AI misuse on The Volokh Conspiracy. He has written about the issue more than 70 times, and contributes to Charlotin’s database.

“I like sharing with my readers little stories like this,” Volokh said, “stories of human folly.”

One involved Tyrone Blackburn, a New York lawyer focusing on employment and discrimination, who used AI to write legal briefs that contained numerous hallucinations.

At first he thought the defense’s allegations were bogus, Blackburn said in an interview. “It was an oversight on my part,” he said.

He eventually admitted to the errors and was fined $5,000 by the judge.

Blackburn said he had been using a new legal AI tool and hadn’t realized it could fabricate cases. His client, whom he was representing for free, fired him and filed a complaint with the bar, Blackburn added.

(In an unrelated matter, a New York grand jury indicted Blackburn last month on allegations he rammed his car into a man trying to serve him legal documents. Attempts to reach Blackburn for additional comment failed.)

Court-ordered penalties “are not having a deterrent effect,” said Freund, who has publicly flagged more than four dozen examples this year. “The proof is that it continues to happen.”
