Ongoing legal cases are setting precedent, but demand clarity
A review of the legal challenges associated with generative AI training disputes emphasises the need for clarity from the UK government, legislature and courts.
The need for greater legal clarity on how tech companies are able to use content in the training of generative AI models has been hotly debated (and challenged) for many years now.
In recent months, we have seen a string of examples of rightsholders seeking to challenge the training activities of generative AI companies. In the US, Reddit accused Anthropic of training its Claude AI chatbot using Reddit user comments (which Reddit says were scraped without permission). In the UK, the BBC accused Perplexity of training its Perplexity AI chatbot using BBC content (which the broadcaster also says was scraped without permission).
In June, Getty Images and Stability AI locked horns in the English High Court during a trial linked to Getty’s allegations that Stability AI has trained its Stable Diffusion model using images scraped without permission from Getty’s websites. Getty is bringing parallel proceedings against Stability AI in the US.
The UK Government’s copyright and AI consultation closed in February. After the Data (Use and Access) Bill received Royal Assent last month without the inclusion of any copyright or AI transparency provisions, the UK government has promised to publish a report on its copyright and AI proposals by mid-March 2026 (with an interim progress report promised by mid-December).
Although these developments suggest that legal clarity may be drawing nearer, we are yet to see evidence that this is the case. If anything, recent developments have emphasised that the legal challenges facing those rightsholders navigating claims against generative AI companies are as significant now as they have ever been.
The challenge of identifying the appropriate legal basis (or bases) of a claim
Commentary often conflates the “rightsholders vs generative AI companies” debate with the “copyright vs AI” debate, when in fact the latter is only one (albeit the predominant) aspect of the former.
Although some claims brought against generative AI companies focus solely or primarily on allegations of copyright infringement (i.e. allegations of unauthorised copying of content during the scraping and ingestion stages of AI training), this is not the case for all.
While Getty has accused Stability AI of copyright infringement (including infringement by virtue of importing an infringing article into the UK), it has also raised accusations of other types of intellectual property infringement, such as trademark infringement. The BBC, in addition to accusing Perplexity of copyright infringement, has alleged that Perplexity’s actions constitute a breach of the BBC’s terms of use. Reddit’s US lawsuit against Anthropic pleads multiple causes of action, none of which is centred on copyright infringement. Rather, the first cause of action, breach of contract, alleges that Anthropic has scraped and subsequently used Reddit forum content in breach of Reddit’s online user agreement.
In some instances, a claimant may not be the owner of any copyright in the content being scraped from their website. In other instances, a copyright owner may see the complexity and uncertainty associated with copyright infringement claims in the context of AI training as a sufficient reason for focusing their efforts and resources on other non-copyright related bases of claim.
Clearly then, even identifying the appropriate legal bases of a claim can be far from straightforward.
The evidential challenge
In the UK, any claimant accusing a generative AI company of scraping and ingesting its content for AI training purposes without permission must substantiate its accusations with evidence, and significant amounts of it. This is easier said than done.
Obtaining sufficient technical data to prove that a particular AI company has scraped content from a website can be challenging. Rightsholders therefore often look to the output generated by generative AI models for clues that their content may have been used during the AI training process.
As an example, Getty has argued that Stable Diffusion’s output bearing the Getty Images watermark is evidence that Stable Diffusion has been trained using images scraped without permission. The BBC has stated that output generated by the Perplexity AI chatbot reproduces its content verbatim, while Reddit asserts that output generated by the Claude AI chatbot makes references to Reddit communities and topics in a way that could only be possible if trained on Reddit content.
A significant amount of time and effort can be required to collate evidence of sufficient quality and quantity. Given the nature and scale of generative AI, it is very difficult to prove that specific content has been ingested and used to create output responses to user prompts.
In the UK, this evidential burden could be eased if the legislature follows the EU’s lead and imposes transparency obligations on generative AI companies to publish a sufficiently detailed summary of the content used for AI training purposes. It remains to be seen whether legislative changes of this nature would make it through Parliament unscathed.
The jurisdictional challenge
Copyright laws (and, where enacted, AI laws) differ from one jurisdiction to the next. Consequently, identifying exactly where AI training activities (and therefore, any alleged infringing acts) have taken place is crucial to determining which territory’s laws will apply.
However, if proving that content has been scraped and ingested for AI training purposes sounds challenging, obtaining evidence that the training has taken place in a particular territory can be even harder.
This is the reality Getty has faced in its UK proceedings against Stability AI. During closing arguments, Getty dropped part of its copyright infringement claim due to difficulties in proving that AI training had actually taken place in the UK (and therefore engaged applicable UK copyright laws). Consequently, on the topic of generative AI training activities, the focus now turns to the parallel proceedings in the US.
Exactly where generative AI companies choose to train their AI models, and how and where rightsholders choose to structure their formal legal proceedings in respect of the same, adds a further layer of complexity to legal claims.
What next for the UK?
As mentioned, the UK government’s report on its copyright and AI proposals is anticipated before spring next year (with an interim progress report promised before the end of the year). Depending on its contents, rightsholders may feel more or less incentivised to tackle the legal challenges discussed above.
But, regardless of the initial standpoint taken in the report, more cases going to the heart of this debate will need to reach the UK courts, or changes to UK legislation will be required, if we are to understand in greater detail if and how these legal challenges can be overcome.
In the meantime, expect to see rightsholders continue to try to take matters into their own hands. We wrote an article for this publication in May 2024 regarding the formal partnerships struck by Reddit with OpenAI and Google, which permit those businesses to use Reddit content subject to agreed licensing terms. It has also been recently reported that new systems are being placed on the market which allow rightsholders to block AI bots from scraping online content without permission or compensation.
The need for legal clarity on this debate is only increasing, and what the UK government, legislature and courts do next will be vital in shaping the future for all concerned.
James Longster is a partner in Travers Smith’s Technology & Commercial Transactions Department, and Rosie Westley is a senior counsel in Travers Smith’s Technology & Commercial Transactions Department.