AI company Anthropic settles copyright lawsuit with book authors: What was the case?
The settlement is the largest payout in the history of US copyright cases. Anthropic, the company behind the Claude family of large language models (LLMs), will pay roughly $3,000 per work, covering an estimated 500,000 books

The artificial intelligence (AI) company Anthropic has agreed to pay $1.5 billion to settle a lawsuit by a group of authors and publishers who had alleged that the company used pirated versions of their books to train its chatbots.
The settlement works out to roughly $3,000 per work, covering an estimated 500,000 books, and is the largest payout in the history of US copyright cases. Anthropic is the company behind the Claude family of large language models (LLMs).
Justin Nelson, a lawyer for the authors, told The Guardian, “As best as we can tell, it’s the largest copyright recovery ever… It is the first of its kind in the AI era.”
Here is a look at the case and the significance of the settlement.
But first, why are books required to train AI models?
AI models such as ChatGPT and Claude are designed to understand and generate coherent, context-relevant text the way a human would. They are able to do so because they are trained on vast amounts of data, usually scraped from the Internet, books, and articles.
However, some of the data used to train the AI models is copyrighted. As a result, numerous writers, music labels, and news agencies, among others, have filed lawsuits in the US, claiming that tech companies “stole” their work to train their AI models.
The tech companies, on the other hand, have argued that they are using the data to create “transformative” AI models, which falls within the ambit of “fair use” — a doctrine in US copyright law that permits limited use of copyrighted material without permission for purposes such as criticism, comment, teaching, and research (for instance, quoting a paragraph from a book in a review).
What was the case against Anthropic?
In August 2024, writers Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson filed a class action complaint — a lawsuit brought on behalf of a larger group of people alleged to have been similarly harmed — against Anthropic.
The petitioners argued that Anthropic downloaded pirated versions of their works, made copies of them, and “fed these pirated copies into its models”. They said that Anthropic had “not compensated the authors”, and “compromised their ability to make a living as the LLMs allow anyone to generate — automatically and freely (or very cheaply) — texts that writers would otherwise be paid to create and sell”.
On June 23, 2025, Judge William Alsup of the US District Court for the Northern District of California ruled that Anthropic’s use of copyrighted data to train its models was “fair use”, centring his reasoning on the “transformative” potential of AI.
Alsup wrote: “Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different. If this training process reasonably required making copies within the LLM or otherwise, those copies were engaged in a transformative use.”
However, Judge Alsup’s ruling also found that Anthropic had downloaded more than 7 million digitised books that it “knew had been pirated”. The company downloaded and used Books3 — an online shadow library of pirated books — to train its models.
It subsequently took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million more from the Pirate Library Mirror, according to the judge.
Why is the settlement significant?
As the lawsuit was settled instead of going to trial, it will not set a legal precedent. However, experts suggest that the settlement could be a turning point in other ongoing lawsuits between tech companies and copyright holders, and could pave the way for these companies to pay rights holders through licensing fees.
Chad Hummel, a trial lawyer who was not involved in the case, told The New York Times, “This is massive… This will cause generative AI companies to sit up and take notice.”