
By Rebecca Steer, Partner, Charles Russell Speechlys.
The UK Government’s recent consultation on copyright and artificial intelligence marks a pivotal moment for law, technology, and the creative industries. With over 11,500 responses received and no legislative action yet taken, the landscape remains uncertain, but the stakes are high. For AI developers, creators, and rights holders alike, the outcomes of this consultation could reshape not only the UK’s approach to intellectual property, but also its place in the global digital economy.
Training AI models: copyright infringement or fair use?
At the heart of the debate is the question of whether training AI models on copyrighted material without permission constitutes infringement. Under the UK’s Copyright, Designs and Patents Act 1988 (CDPA), any reproduction or adaptation of a “substantial part” of a work without a licence is unlawful. There are exemptions for specific use cases, but these are typically limited to research and non-commercial activities.
It is widely understood that large language models are trained on massive data sets of text, much of which is likely to have been taken from the internet, books, articles and similar sources. These data sets teach the models the structure and patterns of human language. If the source materials (web pages, books and articles) were appropriately licensed for this use, there is unlikely to be any copyright infringement. If, however, they were not appropriately licensed and no exemption applies, then copyright infringement could arise. One potentially relevant exemption is the CDPA’s text and data mining exemption, which applies where there is lawful access to copyright works, but only if the use is for non-commercial research.
While outputs generated by AI may not always reproduce a ‘substantial part’ of a work, there are grey areas. For example, if a prompt requests a ‘lookalike’ image (such as a prompt to create an image of an athlete who looks like the runner Keely Hodgkinson), this is likely to involve the use of source images of Keely Hodgkinson. If those source images were not appropriately licensed for use in training the model and do not fall within one of the exemptions, it is easy to see how AI can veer into infringing territory.
For developers, the legal risk is significant. Without transparency in training data, they may inadvertently build models that reproduce protected content. For rights holders, identifying and proving infringement is nearly impossible given the opacity and scale of the datasets used.
Opt-out mechanisms and rights reservation
One of the more controversial elements of the consultation is the idea of a rights reservation system whereby creators must proactively opt out if they do not want their works used for text and data mining (TDM). This reverses the traditional presumption that consent must be obtained before use, effectively placing the burden on rights holders.
This proposed solution, akin to the EU’s approach under the Directive on Copyright in the Digital Single Market, requires machine-readable opt-outs. However, critics, particularly in the creative industries, argue that the mechanisms are unworkable and fail to guarantee compensation. Without a clear, enforceable, and retroactive opt-out system, the risk is that creators will be excluded from the value chain of generative AI altogether.
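To make the idea of a machine-readable opt-out more concrete, the sketch below shows, purely by way of illustration, how a hypothetical crawler might check for reservation signals before using a page for model training. The crawler name “ExampleTDMBot” and the “tdm-reservation” header (modelled loosely on the draft W3C TDM Reservation Protocol) are assumptions for the purpose of the example, not features of any proposed UK regime.

# Illustrative sketch only: checking machine-readable opt-out signals
# before using a page for training. Names and header conventions here
# are assumptions, not any legally mandated mechanism.
from urllib.parse import urlparse
import urllib.request
import urllib.robotparser


def may_use_for_tdm(page_url: str, crawler_name: str = "ExampleTDMBot") -> bool:
    """Return True only if no machine-readable reservation is found."""
    parsed = urlparse(page_url)
    robots_url = f"{parsed.scheme}://{parsed.netloc}/robots.txt"

    # 1. robots.txt: has the site disallowed this crawler?
    robots = urllib.robotparser.RobotFileParser()
    robots.set_url(robots_url)
    try:
        robots.read()
        if not robots.can_fetch(crawler_name, page_url):
            return False
    except OSError:
        pass  # robots.txt unreachable; fall through to the header check

    # 2. Page-level header, modelled loosely on the draft "tdm-reservation"
    #    convention: a value of "1" signals that rights are reserved.
    try:
        request = urllib.request.Request(page_url, method="HEAD")
        with urllib.request.urlopen(request) as response:
            if response.headers.get("tdm-reservation") == "1":
                return False
    except OSError:
        return False  # be conservative if the page cannot be checked

    return True


if __name__ == "__main__":
    print(may_use_for_tdm("https://example.com/article"))

Even a simple check like this surfaces the practical questions critics raise: which signals count, where they must appear, whether they apply retrospectively to works already scraped, and who verifies that crawlers actually honour them.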
Who owns AI outputs? The originality conundrum
The consultation also examines the nature of AI-generated outputs and whether they should be protected by copyright. Under current UK law, literary, dramatic, musical, and artistic works require originality, which has traditionally been understood to require human input. Yet the CDPA also recognises computer-generated works (CGWs), assigning authorship to the person who made the “arrangements necessary” for their creation (potentially the user prompting the AI). The consultation seeks to resolve this tension and ensure the law is clear on the ownership of CGWs, including prompts and outputs.
The global context: staying competitive without breaking the rules
The UK is not legislating in a vacuum. Any changes to its copyright regime must remain compliant with international agreements, particularly the Berne Convention, which mandates that copyright protections be afforded to works irrespective of their form or medium of creation. This means that any relaxation of copyright protections to benefit AI developers must not erode the fundamental rights of authors as enshrined under international law.
At the same time, the UK Government is acutely aware of the global AI arms race and the pressing need to position the country as a competitive destination for AI research, development, and investment. Its AI Opportunities Action Plan articulates this ambition, highlighting the transformative potential of AI for the UK economy, from job creation to public service innovation.
Other jurisdictions are moving fast, but not uniformly. The European Union has opted for a regulatory-first approach. Its Directive on Copyright in the Digital Single Market introduces nuanced exemptions for text and data mining, allowing data to be scraped and mined for scientific research where access is lawful, and for commercial purposes only where rights holders have not opted out via a machine-readable reservation.
Meanwhile, the United States, where the fair use doctrine is more flexible, has become a relatively permissive environment for AI model training, though legal uncertainty remains, as evidenced by ongoing litigation such as the Getty Images case. This disparity in approaches has a direct commercial impact: AI companies may be incentivised to base operations in jurisdictions with more developer-friendly copyright regimes, creating a competitive disadvantage for stricter markets.
The UK’s challenge is a tightrope walk: remain an attractive jurisdiction for AI innovation while upholding the rights and economic viability of its creative industries. The options on the table reflect this balancing act:
Option 1: Licensing Requirement – This would mandate that all AI model training on copyrighted works must be licensed. While it clearly protects creators and establishes a revenue mechanism, it risks driving AI companies abroad, particularly startups and SMEs that cannot afford complex licensing negotiations.
Option 2: Broad Data Mining Exemption – This would offer AI developers maximum flexibility by allowing TDM across all copyrighted content without requiring permission. Though innovation-friendly, this approach could severely undercut the creative economy, breach the UK’s international obligations, and provoke backlash from rights holders both domestically and internationally.
Option 3: TDM with Rights Reservation – Inspired by the EU model, this hybrid approach seeks to allow data mining unless rights holders have explicitly opted out in a machine-readable format. While attractive in theory, its practical implementation is uncertain. Key questions remain unanswered. How will opt-outs be technically implemented across legacy and new works? Will there be a centralised registry or standardised metadata protocol? How will these be enforced across jurisdictions?
There are also concerns about the asymmetry of power: large platforms and AI labs may have the resources to develop or respond to such systems, but independent creators may struggle to protect their rights or secure fair remuneration. Furthermore, aligning too closely with the EU without a credible enforcement apparatus could leave the UK in regulatory limbo, with neither the clarity of US-style fair use nor the full infrastructure of EU rights reservation mechanisms.
A delicate balance
Ultimately, the UK’s position on copyright and AI must strike a delicate balance. It must create legal certainty for AI developers and international investors, while also protecting the rights of its creative communities, many of whom are still reeling from the disruptions of the digital age. The global context is evolving quickly, and the UK’s eventual policy will not only shape domestic innovation but also influence its standing as a global leader in both AI and cultural production.
–

About the author: Rebecca Steer is a Partner at Charles Russell Speechlys. She is a specialist commercial, IP, media and tech lawyer, advising her clients across a full spectrum of legal issues.