Future of Life Institute

Protect the EU AI Act

By Advanced AI Bot · November 22, 2023 · 5 min read


As the White House takes steps to target powerful foundation models and the UK convenes experts to research their potential risks, Germany, France, and Italy have proposed exempting foundation models from regulation entirely. This is presumably to protect European companies like Aleph Alpha and Mistral AI from what they decry as overregulation. This approach is problematic for several reasons.

AI is not like other products

Firstly, the argument that no other product is regulated at the model level – rather than at the user-facing system level – is unconvincing. Companies such as OpenAI charge for access to their models and very much treat them as products. What’s more, few other products can provide people with malware-making, weapon-building, or pathogen-propagating instructions; this merits regulation.

General-purpose AI has been compared to a hammer, because nothing in the design of a hammer can prevent users from harming others with it. Arguing on similar grounds, gun rights advocates contend that ‘guns don’t kill people, people kill people’. People are indeed flawed, and they are an essential contributor to any harm caused. However, regulatory restrictions on the original development and further distribution of any technology can reduce its destructive capacity and lethality regardless of how it is used, even if it falls into the wrong hands.

Downstream AI system developers and deployers will need to conduct use-case-specific risk mitigation. However, data and design choices made at the model level fundamentally shape safety and performance throughout the lifecycle. Application developers can reduce the risk of factual mistakes, but if the underlying model were more accurate and robust, its subsequent applications would be significantly more reliable and trustworthy. If the initial training data contains inherent biases, this will increase discriminatory outputs irrespective of what product developers do.

Foundation models are the bedrock of the AI revolution, so it is reasonable that their providers – who seek to supply these models to others for commercial benefit – should govern their training data and test their systems for cybersecurity, interpretability and predictability, measures which simply cannot be implemented at the system level alone. Mandatory internal and external model-level testing, such as red teaming, is essential to verify capabilities and limitations and to determine whether a model is suitable for supply in the Single Market.

Because foundation models are a single point of failure, their flaws will have far-reaching consequences across society that will be impossible to trace and mitigate if the burden is dumped on downstream system providers. Disproportionately burdening application developers does not incentivise foundation model providers to design adequate safety controls, a point the European Digital SME Alliance has rightly raised on behalf of 45,000 enterprises. Without hard law, major providers will kick the can down the road to those with inevitably and invariably less knowledge of the model’s underlying capabilities and risks.

Codes of conduct are unenforceable

Secondly, codes of conduct – the favoured option of those who would keep foundation models outside the scope of AI rules – are mere guidelines, lacking the legal force to compel companies to act in the broader public interest.

Even if adopted, codes can be selectively interpreted by companies, which will cherry-pick the rules they prefer, causing fragmentation and insufficient consumer protection across the Union. As these models will be foundational to innumerable downstream applications across the economy and society, codes of conduct will do nothing to increase trust in, or uptake of, beneficial and innovative AI.

Codes of conduct offer no clear means of detecting and remedying infringements. This creates a culture of complacency among foundation model developers, as well as increased uncertainty for developers building on top of their models. Amid growing concentration and diminishing consumer choice, why should they care if there is ultimately no consequence for any wrongdoing? Users and downstream developers alike will be unable to avoid their products anyway, much as they cannot avoid large digital platforms.

The voluntary nature of codes allows companies to simply ignore them. The European Commission was predictably powerless to prevent X (formerly Twitter) from exiting the Code of Practice on Disinformation. Self-regulation outsources democratic decisions to private power, whose voluntary – not mandatory – compliance alone cannot protect the public.

Model cards bring nothing new to the table  

Finally, the suggested model cards, introduced by Google researchers in 2019, are not a new concept and are already widely used in the market; writing them into the AI Act as a solution to advanced AI changes nothing. One significant limitation of model cards lies in their subjective nature, as they rely on developers’ own assessments without third-party assurance. While model cards can provide information about training data, they cannot substitute for thorough model testing and validation by independent experts. Simply documenting potential biases within a self-regulatory framework does not effectively mitigate them.

In this context, the technical documentation proposed by the European Parliament, to be required from foundation model providers, is a comprehensive solution. The Parliament mandates many more details than model cards, including the provider’s name, contact information, trade name, data sources, model capabilities and limitations, foreseeable risks, mitigation measures, training resources, model performance on benchmarks, testing and optimisation results, market presence in Member States, and an optional URL. This approach ensures thorough and standardised disclosures, ameliorating fragmentation while fostering transparency and accountability.
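To make the contrast concrete, the sketch below renders the documentation fields listed above as a simple data structure. It is illustrative only: the field names, types and comments are assumptions made for readability, not the legal wording of the Parliament’s proposal.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FoundationModelDocumentation:
    """Hypothetical sketch of Parliament-style technical documentation.

    Field names are illustrative, not the legal text of the proposal.
    """
    provider_name: str
    contact_information: str
    trade_name: str
    data_sources: list[str]
    capabilities_and_limitations: str
    foreseeable_risks: list[str]
    mitigation_measures: list[str]
    training_resources: str                  # e.g. compute used during training
    benchmark_performance: dict[str, float]  # benchmark name -> score
    testing_and_optimisation_results: str
    member_states_on_market: list[str]
    url: Optional[str] = None                # optional in the proposal

# A typical model card, by contrast, is a self-reported subset of this
# (intended use, training data summary, evaluation metrics, known biases),
# which is why it adds little on its own.
```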

Protect the EU AI Act from irrelevance 

Exempting foundation models from regulation is a dangerous misstep. No other product can autonomously deceive users. Controls begin upstream, not downstream. Voluntary codes of conduct and model cards are weak substitutes for mandatory regulation, and risk rendering the AI Act a paper tiger. Sacrificing the AI Act’s ambition of safeguarding 450 million people from well-known AI hazards to ensure trust and uptake would upset its original equilibrium – especially considering existing proposals which effectively balance innovation and safety. Despite pioneering AI regulation internationally, Europe now risks lagging behind the US, which could set global safety standards through American norms on the frontier of this emerging and disruptive technology.


