AI deepfakes protection or internet freedom threat?

By Advanced AI Editor | June 25, 2025


Critics fear the revised NO FAKES Act has morphed from targeted AI deepfakes protection into sweeping censorship powers.

What began as a seemingly reasonable attempt to tackle AI-generated deepfakes has snowballed into something far more troubling, according to digital rights advocates. The much-discussed Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act – originally aimed at preventing unauthorised digital replicas of people – now threatens to fundamentally alter how the internet functions.

The bill’s expansion has set alarm bells ringing throughout the tech community. It’s gone well beyond simply protecting celebrities from fake videos to potentially creating a sweeping censorship framework.

From sensible safeguards to a sledgehammer approach

The initial idea wasn’t entirely misguided: to create protections against AI systems generating fake videos of real people without permission. We’ve all seen those unsettling deepfakes circulating online.

But rather than crafting narrow, targeted measures, lawmakers have opted for what the Electronic Frontier Foundation calls a “federalised image-licensing system” that goes far beyond reasonable protections.

“The updated bill doubles down on that initial mistaken approach,” the EFF notes, “by mandating a whole new censorship infrastructure for that system, encompassing not just images but the products and services used to create them.”

What’s particularly worrying is the NO FAKES Act’s requirement for nearly every internet platform to implement systems that would not only remove content after receiving takedown notices but also prevent similar content from ever being uploaded again. Essentially, it would force platforms to deploy content filters that have proven notoriously unreliable in other contexts.

Chilling innovation

Perhaps most concerning for the AI sector is how the NO FAKES Act targets the tools themselves. The revised bill wouldn’t just go after harmful content; it would potentially shut down entire development platforms and software tools that could be used to create unauthorised images.

This approach feels reminiscent of trying to ban word processors because someone might use one to write defamatory content. The bill includes some limitations (e.g. tools must be “primarily designed” for making unauthorised replicas or have limited other commercial uses) but these distinctions are notoriously subject to interpretation.

Small UK startups venturing into AI image generation could find themselves caught in expensive legal battles based on flimsy allegations long before they have a chance to establish themselves. Meanwhile, tech giants with armies of lawyers can better weather such storms, potentially entrenching their dominance.

Anyone who’s dealt with YouTube’s Content ID system or similar copyright filtering tools knows how frustratingly imprecise they can be. These systems routinely flag legitimate content, such as musicians performing their own songs or creators using material under fair dealing provisions.

The NO FAKES Act would effectively mandate similar filtering systems across the internet. While it includes carve-outs for parody, satire, and commentary, enforcing these distinctions algorithmically has proven virtually impossible.

“These systems often flag things that are similar but not the same,” the EFF explains, “like two different people playing the same piece of public domain music.”

For smaller platforms without Google-scale resources, implementing such filters could prove prohibitively expensive. The likely outcome? Many would simply over-censor to avoid legal risk.
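
That imprecision isn’t incidental: any filter that matches on similarity rather than provenance will also catch independent works that merely resemble each other. The toy sketch below is purely illustrative and hypothetical (real systems fingerprint spectrogram peaks rather than note lists, and none of these function names belong to any actual platform); it shows how two separate performances of the same public-domain melody collapse to near-identical fingerprints once performance-specific detail is quantised away, so a re-upload filter flags a “match” even though neither recording copies the other.

```python
# Hypothetical illustration only: not any platform's real filter.
import numpy as np

def toy_fingerprint(notes, jitter, seed):
    """Collapse a performance into a coarse binary fingerprint.

    `notes` stands in for the shared melody; `jitter` models the small
    timing/tuning differences that make each performance unique. The coarse
    rounding step throws that detail away, which is exactly why two
    independent renditions end up with the same fingerprint.
    """
    rng = np.random.default_rng(seed)
    performance = np.array(notes, dtype=float) + rng.normal(0.0, jitter, len(notes))
    return np.round(performance).astype(int) % 2

def similarity(fp_a, fp_b):
    """Fraction of fingerprint bits that agree (1.0 = identical)."""
    return 1.0 - float(np.mean(fp_a != fp_b))

melody = [60, 62, 64, 65, 67, 69, 71, 72]   # the same public-domain tune
performer_a = toy_fingerprint(melody, jitter=0.1, seed=1)
performer_b = toy_fingerprint(melody, jitter=0.1, seed=2)  # a different artist

MATCH_THRESHOLD = 0.9  # "similar enough to block"
score = similarity(performer_a, performer_b)
print(f"similarity = {score:.2f}, blocked: {score >= MATCH_THRESHOLD}")
# Prints similarity = 1.00, blocked: True, even though neither performance
# copies the other; they merely share the same lawful, public-domain source.
```

Real filters are far more sophisticated, but the structural problem is the same: similarity is not evidence of copying, let alone of an unauthorised digital replica.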

One might expect major tech companies to oppose such sweeping regulation, yet many have remained conspicuously quiet. Some industry observers suggest this isn’t coincidental: established giants can more easily absorb compliance costs that would crush smaller competitors.

“It is probably not a coincidence that some of these very giants are okay with this new version of NO FAKES,” the EFF notes.

This pattern repeats throughout the history of tech regulation: what appears to be regulation reining in Big Tech often ends up cementing its market position by creating barriers too costly for newcomers to overcome.

NO FAKES Act threatens anonymous speech

Tucked away in the legislation is another troubling provision that could expose anonymous internet users based on mere allegations. The bill would allow anyone to obtain a subpoena from a court clerk – without judicial review or evidence – forcing services to reveal identifying information about users accused of creating unauthorised replicas.

History shows such mechanisms are ripe for abuse. Critics with valid points can be unmasked and potentially harassed when their commentary includes screenshots or quotes from the very people trying to silence them.

This vulnerability could have a profound effect on legitimate criticism and whistleblowing. Imagine exposing corporate misconduct only to have your identity revealed through a rubber-stamp subpoena process.

This push for additional regulation seems odd given that Congress recently passed the Take It Down Act, which already targets images involving intimate or sexual content. That legislation itself raised privacy concerns, particularly around monitoring encrypted communications.

Rather than assess the impacts of existing legislation, lawmakers seem determined to push forward with broader restrictions that could reshape internet governance for decades to come.

The coming weeks will prove critical as the NO FAKES Act moves through the legislative process. For anyone who values internet freedom, innovation, and balanced approaches to emerging technology challenges, this bears close watching indeed.

(Photo by Markus Spiske)

See also: The OpenAI Files: Ex-staff claim profit greed betraying AI safety




