Advanced AI News

Creators Are Losing the AI Copyright Battle. We Have to Keep Fighting

By Advanced AI Bot | May 8, 2025 | 5 Mins Read


The struggle between AI companies and creatives around “training data” — or what you and I would refer to as people’s life’s work — may be the defining struggle of this generation for the media industries. AI companies want to exploit creators’ work without paying them, using it to train AI models that compete with those creators; creators and rights holders are doing everything they can to stop them.

In late 2023, I quit my job at Stability AI because of disagreements over this issue, and I’ve been campaigning for fairer treatment of creators by AI companies ever since. Creators have always been the underdog in this fight. But recently, for the first time, a clear path to us losing has emerged. I don’t want to sound defeatist — I think there’s still a real chance that the world settles on a fair balance between AI companies’ and creators’ interests. But the odds are not in our favor, and I think it’s important we’re open about this.

In many ways, we are winning. The copyright-related lawsuits against AI companies are stacking up — 40 in the US alone at last count — with more all but inevitable. The first of these to be resolved, Thomson Reuters v. Ross Intelligence, went the way of the rights holder. Yes, the defendant had already gone bankrupt, and there are elements of the case that set it apart from some of the other lawsuits. But the judge's ruling made it very clear that the competitive effect of Ross's use of Thomson Reuters' works was a decisive factor in Ross's defeat. Consider that a similar competitive effect exists in many of the ongoing lawsuits — AI companies harm the market for the work they train on — and you will see why the creative industries were so encouraged by this outcome.

We’re also winning in the court of public opinion. Every poll that has asked the general public whether AI companies should be allowed to train on copyrighted works without permission shows a majority siding with rights holders. These companies spend fortunes developing their tech; why shouldn’t they have to pay for the content too, particularly given that it’s arguably the most important piece of the puzzle?

So why do I say we’re losing? Put simply, because there is a risk that governments change copyright law to favor AI companies. And if that happens, the fight is lost.

The U.K. is where this is most explicitly being considered. In late December, the government announced a consultation on AI and copyright. This was not some neutral consultation: even before anyone had responded, the government announced it had a “preferred option,” one that would give AI companies access to any copyrighted work that rights holders hadn’t explicitly opted out of via some as-yet-nonexistent rights reservation scheme. In other words — since it’s known that, when opt-out schemes like this are run, most people miss the chance to opt out — they want to turn copyright law on its head and hand most of the U.K.’s creative output to AI companies for free.

British creators were, and still are, livid about this. Stars from Paul McCartney to Elton John to Dua Lipa to Barbara Broccoli have come out vocally against the plans. I organized a protest album involving more than 1,000 British musicians. Every major newspaper in the country ran the same front page in protest.

But there are worrying signs the government may ignore these calls for fairness. Peter Kyle, the U.K.’s Secretary of State for Science, Innovation and Technology, recently dismissed the protests as coming from people who “resist change”. No matter that the country’s artists are united, the government has its fingers in its ears.

The tech lobby in the U.S. has its sights set on similar legislation. In OpenAI’s recommendation for the White House’s upcoming AI Action Plan, the company argued that AI’s ability to train on copyrighted material should be “preserved.” Of course, this is spin — rights holders strongly disagree that any such right currently exists, as the lawsuits attest. What OpenAI is really shooting for is new legislation that grants it this right. The company is no doubt hoping that the many technologists who now have the ear of the president — and who have huge vested interests in AI avoiding legal hurdles — can help make this dream of immunity reality.

To make matters worse, countries are second-guessing each other. The U.K. government wants to compete with California, so, suspecting that the U.S. might let AI companies off the hook, it is preemptively copying the U.S. — ignoring the fact that many people think Californian AI companies are guilty of vast copyright infringement.

If AI companies get the right to use the life’s work of the world’s creators for free, they’ll do so by stoking governments’ fears of losing the AI race. There are senior people in governments around the world who believe that artificial general intelligence is just around the corner. They want it to be built in their country, and they are buying the lie that the route there is legalizing the theft of creative work.

This is why we are losing. But we haven’t lost yet, far from it. Now is the time to organize. Creators must get serious about the very real threat that governments will take their work and give it to AI companies for nothing. Writing more joint letters isn’t enough. If we organize, if we make our voices impossible to ignore, we can still stop this large-scale theft of humanity’s creativity. After all, democratic governments have a duty to listen to their people — and the people are clear that this theft cannot stand.

Ed Newton-Rex is the founder of Fairly Trained, a non-profit focused on ensuring generative AI companies ethically source the content they use to train their AI models. He previously served as the head of audio at artificial intelligence company Stability AI.


