Scraping the surface of generative AI training disputes and their legal challenges

By Advanced AI Editor | July 23, 2025

Ongoing legal cases are setting precedent, but demand clarity

A review of the legal challenges associated with generative AI training disputes emphasises the need for clarity from the UK government, legislature and courts.

The need for greater legal clarity on how tech companies may use content to train generative AI models has been hotly debated (and challenged) for many years now.

In recent months, we have seen a string of examples of rightsholders seeking to challenge the training activities of generative AI companies. In the US, Reddit accused Anthropic of training its Claude AI chatbot using Reddit user comments (which Reddit says were scraped without permission). In the UK, the BBC accused Perplexity of training its Perplexity AI chatbot using BBC content (which the broadcaster also says was scraped without permission).

In June, Getty Images and Stability AI locked horns in the English High Court during a trial linked to Getty’s allegations that Stability AI has trained its Stable Diffusion model using images scraped without permission from Getty’s websites. Getty is bringing parallel proceedings against Stability AI in the US.

The UK government’s copyright and AI consultation closed in February. After the Data (Use and Access) Bill received Royal Assent last month without the inclusion of any copyright or AI transparency provisions, the UK government has promised to publish a report on its copyright and AI proposals by mid-March 2026 (with an interim progress report promised by mid-December).

Although these developments suggest that legal clarity may be drawing nearer, we are yet to see evidence that this is the case. If anything, recent developments have emphasised that the legal challenges facing those rightsholders navigating claims against generative AI companies are as significant now as they have ever been.

The challenge of identifying the appropriate legal basis (or bases) of a claim

Commentary often conflates the “rightsholders vs generative AI companies” debate with the “copyright vs AI” debate, when in fact the latter is only one (albeit the predominant) aspect of the former.

Although some claims brought against generative AI companies focus solely or primarily on allegations of copyright infringement (i.e. allegations of unauthorised copying of content during the scraping and ingestion stages of AI training), this is not the case for all.

While Getty has accused Stability AI of copyright infringement (including infringement by virtue of importing an infringing article into the UK), it has also raised accusations of other types of intellectual property infringement, such as trademark infringement. The BBC, in addition to accusing Perplexity of copyright infringement, has alleged that Perplexity’s actions constitute a breach of the BBC’s terms of use. Reddit’s US lawsuit against Anthropic advances multiple causes of action, none of which is centred on copyright infringement. Rather, the first cause of action, breach of contract, alleges that Anthropic scraped and subsequently used Reddit forum content in breach of Reddit’s online user agreement.

In some instances, a claimant may not be the owner of any copyright in the content being scraped from their website. In other instances, a copyright owner may see the complexity and uncertainty associated with copyright infringement claims in the context of AI training as a sufficient reason for focusing their efforts and resources on other non-copyright related bases of claim.

Clearly then, even identifying the appropriate legal bases of a claim can be far from straightforward.

The evidential challenge

In the UK, any claimant accusing a generative AI company of scraping and ingesting its content for AI training purposes without permission must substantiate its accusations with evidence, and significant amounts of it. This is easier said than done.

Obtaining sufficient technical data to prove that a particular AI company has scraped content from a website can be challenging. Rightsholders therefore often look to the output generated by generative AI models for clues suggesting that their content may have been used during the AI training process.

As an example, Getty has argued that Stable Diffusion’s output bearing the Getty Images watermark is evidence that Stable Diffusion has been trained using images scraped without permission. The BBC has stated that output generated by the Perplexity AI chatbot reproduces its content verbatim, while Reddit asserts that output generated by the Claude AI chatbot makes references to Reddit communities and topics in a way that could only be possible if trained on Reddit content.

A significant amount of time and effort can be required to collate evidence of sufficient quality and quantity. Given the nature and scale of generative AI, it is very difficult to prove that specific content has been ingested and used to create output responses to user prompts.
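
To give a concrete (and purely illustrative) sense of the kind of output-based analysis involved, the short sketch below compares a chatbot’s response against a passage the rightsholder published and reports how much of that passage reappears verbatim, measured as shared word n-grams. The texts and helper names are hypothetical placeholders; this is not drawn from evidence in any of the cases discussed.

    # Illustrative only: a crude check for verbatim reproduction of known
    # content in a model's output, of the kind rightsholders describe when
    # pointing to output-based evidence. All inputs here are placeholders.

    def ngrams(text: str, n: int = 8) -> set:
        """Return the set of word n-grams appearing in the text."""
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def verbatim_overlap(source: str, output: str, n: int = 8) -> float:
        """Fraction of the source's n-grams that reappear verbatim in the output."""
        source_grams = ngrams(source, n)
        if not source_grams:
            return 0.0
        return len(source_grams & ngrams(output, n)) / len(source_grams)

    if __name__ == "__main__":
        source_passage = "..."  # passage the rightsholder published (placeholder)
        model_output = "..."    # response returned by the chatbot under test (placeholder)
        print(f"Verbatim 8-gram overlap: {verbatim_overlap(source_passage, model_output):.1%}")

Even a high overlap score across many prompts does not by itself prove that the content was ingested during training, which is precisely the evidential gap described above.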

In the UK, this evidential burden could be eased if the legislature follows the EU’s lead and imposes transparency obligations on generative AI companies to publish a sufficiently detailed summary of the content used for AI training purposes. We wait to see whether legislative changes of this nature would make it through Parliament unscathed.

The jurisdictional challenge

Copyright laws (and, where enacted, AI laws) differ from one jurisdiction to the next. Consequently, identifying exactly where AI training activities (and therefore, any alleged infringing acts) have taken place is crucial to determining which territory’s laws will apply.

However, if proving that content has been scraped and ingested for AI training purposes sounds challenging, obtaining evidence that the training has taken place in a particular territory can be even harder.

This is the reality Getty has faced in its UK proceedings against Stability AI. During closing arguments, Getty dropped part of its copyright infringement claim due to issues proving that AI training had actually taken place in the UK (and therefore engaged applicable UK copyright laws). Consequently, on the topic of generative AI training activities, the focus now turns to the parallel proceedings in the US.

Exactly where generative AI companies choose to train their AI models, and how and where rightsholders choose to structure their formal legal proceedings in respect of the same, adds a further layer of complexity to legal claims.

What next for the UK?

As mentioned, the UK government’s report on its copyright and AI proposals is anticipated before spring next year (with an interim progress report promised before the end of the year). Depending on its contents, we may see rightsholders feeling less or more incentivised to tackle the legal challenges discussed above.

But, regardless of the initial standpoint taken in the report, more cases going to the heart of this debate will need to reach the UK courts, or changes to UK legislation will be required, if we are to understand in greater detail if and how these legal challenges can be overcome.

In the meantime, expect to see rightsholders continue to try to take matters into their own hands. We wrote an article for this publication in May 2024 regarding the formal partnerships struck by Reddit with OpenAI and Google, which permit those businesses to use Reddit content subject to agreed licensing terms. It has also been recently reported that new systems are being placed on the market which allow rightsholders to block AI bots from scraping online content without permission or compensation.
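
By way of illustration only, one of the simplest mechanisms of this kind, long predating the newer commercial offerings being reported, is a robots.txt file asking known AI crawlers not to collect a site’s content. The user-agent tokens below are ones the relevant operators have published for their crawlers; compliance with robots.txt is voluntary, one reason rightsholders are also turning to the licensing deals and dedicated blocking systems described above.

    # robots.txt — asks published AI training and answer-engine crawlers to stay out
    User-agent: GPTBot
    Disallow: /

    User-agent: ClaudeBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: PerplexityBot
    Disallow: /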

The need for legal clarity on this debate is only increasing, and what the UK government, legislature and courts do next will be vital in shaping the future for all concerned.

James Longster is a partner, and Rosie Westley a senior counsel, in Travers Smith’s Technology & Commercial Transactions Department.


