A federal judge certified a class action lawsuit against AI firm Anthropic on Thursday, dramatically raising the financial stakes in a landmark copyright battle. The ruling from the Northern District of California allows authors nationwide to collectively sue the company for allegedly using pirated books to train its Claude AI model.
This decision lands just days after Anthropic sought to appeal a related June order, citing a deep and confusing split between two federal judges on whether using copyrighted material for AI training constitutes fair use. The conflicting rulings have created profound legal uncertainty for the entire AI industry.
The certification, which The Authors Guild celebrated as a “critical step,” means Anthropic now faces a unified front. Instead of fighting individual authors, it must defend against a class representing potentially hundreds of thousands of writers whose works were allegedly downloaded from pirate libraries.
Class Action Certified, Dramatically Raising Financial Stakes
The ruling by U.S. District Judge William Alsup on July 17 transforms the legal landscape for Anthropic. It consolidates numerous individual claims from authors like Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson into a single, powerful lawsuit.
This move significantly increases Anthropic’s exposure to statutory damages, which can reach $150,000 per infringed work in cases of willful infringement. The collective nature of the suit could put the company on the hook for billions in liability if the authors prevail.
The lawsuit, originally filed in August 2024, accuses the Amazon- and Google-backed startup of building its powerful Claude AI models on a foundation of stolen intellectual property from sources like LibGen and Books3.
A Tale of Two Judges: Conflicting Rulings Create Legal Chaos
The core of the industry’s turmoil stems from two contradictory rulings from the same federal court. On June 23, Judge Alsup issued a split decision in the Anthropic case, finding that training an AI model on books constituted a “quintessentially transformative” fair use.
Judge Alsup hailed the innovation, stating, “The technology at issue was among the most transformative many of us will see in our lifetimes.” However, he drew a hard line on data sourcing. He ruled that the fair use defense does not excuse the initial act of piracy and ordered a trial on that specific issue. In his order, he declared, “We will have a trial on the pirated copies used to create Anthropic’s central library and the resulting damages.”
Just two days later, in a parallel case against Meta, Judge Vince Chhabria issued a stunningly different opinion. He directly criticized Judge Alsup’s logic, arguing that one cannot separate the data acquisition from its ultimate purpose. He wrote that Judge Alsup “brushed aside concerns about the harm it can inflict on the market for the works it gets trained on,” creating a direct judicial split.
Anthropic Seeks Higher Court Appeal Amid Uncertainty
Caught between these opposing views, Anthropic filed a motion on July 14 seeking an interlocutory appeal. This rare legal maneuver, which lets a party appeal a ruling before a final judgment, underscores the severity of the situation: the company argued that proceeding to trial is impossible when the fundamental legal rules are in dispute.
In its motion, Anthropic stated, “This Court should obtain guidance from the Ninth Circuit on the issue now instead of holding a trial that may need to be redone under a different legal framework—or may not be necessary at all.” The company believes the conflicting precedents must be resolved by the Ninth Circuit Court of Appeals.
The company’s lawyers emphasized the need for clarity, writing, “It is important that the Ninth Circuit resolve this disagreement now so that the correct legal framework governs pending and future copyright challenges to generative AI technology.” This appeal now hangs in the balance as the newly certified class action lawsuit moves forward.
The New Legal Frontline: Separating Data Acquisition from AI Application
This legal battle is forging a new and critical distinction in copyright law: separating the application of data from its acquisition. Judge Alsup’s ruling suggests that while the final AI product may be transformative, that transformation does not sanitize the original sin of using pirated materials.
Conversely, Judge Chhabria’s ruling in the Meta case suggests a more holistic view. He argued that if the ultimate use is transformative, the means of acquiring the data are part of that protected use. He noted, “The whole point of fair use analysis is to determine whether a given act of copying was unlawful,” framing the entire process as a single act to be analyzed.
This fundamental disagreement is the central question facing the AI industry. The outcome of Anthropic’s case, and its potential appeal, will set a precedent that could either shield AI companies under a broad fair use doctrine or expose them to billions in damages for their data sourcing practices. The era of “scrape first, ask questions later” appears to be definitively over.