Getty Pivots in UK Lawsuit Against Stability AI, Shifting the Copyright Battleground

By Advanced AI Editor | June 26, 2025
In a calculated pivot that reshapes one of the tech world’s most significant legal battles over AI training, Getty Images has dropped its primary copyright infringement claims against Stability AI in London’s High Court. The move dramatically narrows the scope of the landmark UK lawsuit, steering the case away from a direct challenge to the legality of AI training itself and toward more nuanced questions of trademark and secondary copyright infringement.

This tactical shift does not end the confrontation but rather reframes it. Initially positioned as a “day of reckoning” for AI developers, the lawsuit will no longer focus on whether Stability AI’s training of its Stable Diffusion model on millions of Getty’s images was inherently illegal. The new development signals a potential recalibration of strategy in the broader war between content creators and AI firms, coming just a day after a U.S. judge delivered a seismic ruling in a similar dispute involving the AI company Anthropic. In response to the change, a spokesperson for Stability AI said the company was pleased with Getty’s decision to drop multiple claims.

While the core training and output claims have been withdrawn, the fight continues on two key fronts. Getty is pursuing a secondary infringement claim, which posits that the AI model itself is an “infringing article” illegally imported into the UK. The second front is a trademark claim centered on the appearance of Getty’s iconic watermark on some AI-generated images. Meanwhile, Getty’s parallel and far larger lawsuit in the United States, which seeks up to $1.7 billion in damages, remains completely unaffected.

A Strategic Retreat or a Sharpened Legal Spear?

When the trial began, it was dominated by a confrontational tone, with Getty’s lawyers arguing for the “straightforward enforcement of intellectual property rights.” The decision to now abandon those central claims represents a stark departure. According to Getty’s closing arguments, this was a “pragmatic decision” made after reviewing witness and expert testimony, which it said was lacking from Stability AI.

Legal experts, however, suggest the move may reflect the immense difficulty of winning on the primary copyright claims under current UK law. Getty likely faced challenges in establishing a sufficient link between the AI training acts and UK jurisdiction. The focus now shifts to the secondary infringement theory, which has the widest relevance for AI companies that train their models outside the UK.

For its part, Stability AI has argued the trademark claims will fail because consumers do not interpret the watermarks as a commercial message from the company. The abrupt narrowing of the case has left some observers wanting more. The new development will likely frustrate those on both sides of the debate, who were hoping that the outcome of the trial might bring some clarity to the very issues which have now been dropped.

The Anthropic Precedent: A Bright Line Between Training and Theft

As the Getty case pivots in London, a landmark decision for Anthropic in a California federal court is sending shockwaves through the industry by drawing a sharp new line in the sand. In a summary judgment order, Judge William Alsup ruled that the act of training an AI model on copyrighted books constitutes a “transformative” fair use, a major victory for AI developers.

However, that victory came with a monumental catch: the judge ruled that this protection does not extend to the methods used to acquire the training data. The court found that Anthropic must face a high-stakes trial for building its dataset from pirated online libraries. Internal communications revealed that company executives preferred using pirated books to avoid the legal/practice/business slog of licensing.

The judge was unsparing in his assessment of that logic: “That rationale cannot be squared with the Copyright Act.” This creates a crucial legal distinction between the application of AI and the acquisition of data. As Judge Alsup declared, “We will have a trial on the pirated copies used to create Anthropic’s central library and the resulting damages.”

This split decision was met with fierce opposition from creator groups. In a response from The Authors Guild, the organization argued the ruling “contradicts established copyright precedent” and “ignores the harm caused to authors” from market saturation by AI-generated content that directly competes with their work.

A Widening Copyright War on Multiple Fronts

The Getty and Anthropic cases are key fronts in a global conflict that now spans nearly every creative industry. The legal theories being tested are setting precedents for disputes involving authors, artists, and musicians. In one such example, a now-settled lawsuit filed by major music publishers alleged that Anthropic unlawfully used copyrighted song lyrics to train its Claude AI.

This complex legal environment highlights the dual-track strategy many content holders are adopting. Getty Images itself is not opposed to artificial intelligence; in fact, it has launched its own generative AI offering that was trained exclusively on its own licensed content and compensates the contributing artists. This approach frames its legal fight not as a Luddite rejection of technology, but as a battle for control and compensation. In 2023, the company asserted its belief that Stability AI “chose to ignore viable licensing options and long-standing legal protections in pursuit of their stand-alone commercial interests.”

The recent change in the case suggests the central question in the AI copyright wars is evolving. The industry is moving past the broad debate over whether AI training is fair use and into a more granular, and perhaps more perilous, examination of the data supply chain. The era of “scrape first, ask questions later” appears to be definitively over. For AI companies, proving clean data lineage is no longer a matter of ethics but of immense legal and financial liability, marking a new and decisive battleground in the fight to define the future of creativity.


