In a calculated pivot that reshapes one of the tech world’s most significant legal battles about AI training, Getty Images has dropped its primary copyright infringement claims against Stability AI in London’s High Court. The move dramatically narrows the scope of the landmark UK lawsuit, steering the case away from a direct challenge to the legality of AI training itself and toward more nuanced questions of trademark and secondary copyright infringement.
This tactical shift does not end the confrontation but rather reframes it. Initially positioned as a “day of reckoning” for AI developers, the lawsuit will no longer focus on whether Stability AI’s training of its Stable Diffusion model on millions of Getty’s images was inherently illegal. The new development signals a potential recalibration of strategy in the broader war between content creators and AI firms, coming just a day after a U.S. judge delivered a seismic ruling in a similar dispute involving the AI company Anthropic. In response to the change, a spokesperson for Stability AI said the company was pleased with Getty’s decision to drop multiple claims.
While the core training and output claims have been withdrawn, the fight continues on two key fronts. Getty is pursuing a secondary infringement claim, which posits that the AI model itself is an “infringing article” illegally imported into the UK. The second front is a trademark claim centered on the appearance of Getty’s iconic watermark on some AI-generated images. Meanwhile, Getty’s parallel and far larger lawsuit in the United States, which seeks up to $1.7 billion in damages, remains completely unaffected.
A Strategic Retreat or a Sharpened Legal Spear?
When the trial began, it was dominated by a confrontational tone, with Getty’s lawyers arguing for the “straightforward enforcement of intellectual property rights.” The decision to abandon those central claims now represents a stark departure. In its closing arguments, Getty called the move a “pragmatic decision” made after reviewing witness and expert testimony, which it characterized as lacking on Stability AI’s side.
Legal experts, however, suggest the move may reflect the immense difficulty of winning on the primary copyright claims under current UK law. Getty likely faced challenges in establishing a sufficient link between the AI training acts and UK jurisdiction. The focus now shifts to the secondary infringement theory, which has the widest relevance for AI companies that train their models outside the UK.
For its part, Stability AI has argued the trademark claims will fail because consumers do not interpret the watermarks as a commercial message from the company. The abrupt narrowing of the case will likely frustrate observers on both sides of the debate who had hoped the trial’s outcome might bring clarity to the very issues that have now been dropped.
The Anthropic Precedent: A Bright Line Between Training and Theft
As the Getty case pivots in London, a landmark decision for Anthropic in a California federal court is sending shockwaves through the industry by drawing a sharp new line in the sand. In a summary judgment order, Judge William Alsup ruled that the act of training an AI model on copyrighted books constitutes a “transformative” fair use, a major victory for AI developers.
However, that victory came with a monumental catch: the judge ruled that this protection does not extend to the methods used to acquire the training data. The court found that Anthropic must face a high-stakes trial for building its dataset from pirated online libraries. Internal communications revealed that company executives preferred using pirated books to avoid the “legal/practice/business slog” of licensing.
The judge was unsparing in his assessment of that logic: “That rationale cannot be squared with the Copyright Act.” This creates a crucial legal distinction between the use of data to train AI models and the means by which that data is acquired. As Judge Alsup declared, “We will have a trial on the pirated copies used to create Anthropic’s central library and the resulting damages.”
This split decision drew fierce opposition from creator groups. The Authors Guild argued the ruling “contradicts established copyright precedent” and “ignores the harm caused to authors” from market saturation by AI-generated content that directly competes with their work.
A Widening Copyright War on Multiple Fronts
The Getty and Anthropic cases are key fronts in a global conflict that now spans nearly every creative industry. The legal theories being tested are setting precedents for disputes involving authors, artists, and musicians. In one such example, a now-settled lawsuit filed by major music publishers alleged that Anthropic unlawfully used copyrighted song lyrics to train its Claude AI.
This complex legal environment highlights the dual-track strategy many content holders are adopting. Getty Images itself is not opposed to artificial intelligence; in fact, it has launched its own generative AI offering, trained exclusively on its own licensed content, that compensates the contributing artists. This approach frames its legal fight not as a Luddite rejection of technology, but as a battle for control and compensation. In 2023, the company said it believed Stability AI “chose to ignore viable licensing options and long-standing legal protections in pursuit of their stand-alone commercial interests.”
The recent changes in the case suggest the central question in the AI copyright wars is evolving. The industry is moving past the broad debate over whether AI training is fair use and into a more granular, and perhaps more perilous, examination of the data supply chain. The era of “scrape first, ask questions later” appears to be definitively over. For AI companies, proving clean data lineage is no longer a matter of ethics but of immense legal and financial liability, marking a new and decisive battleground in the fight to define the future of creativity.