Advanced AI News
VentureBeat AI

Senator’s RISE Act would require AI developers to list training data, evaluation methods in exchange for ‘safe harbor’ from lawsuits

By Advanced AI Bot | June 13, 2025 | 5 min read

Amid an increasingly tense and destabilizing week for international news, it should not escape any technical decision-maker's notice that some lawmakers in the U.S. Congress are still moving forward with newly proposed AI regulations that could reshape the industry in powerful ways, and that aim to steady it going forward.

Case in point: yesterday, U.S. Republican Senator Cynthia Lummis of Wyoming introduced the Responsible Innovation and Safe Expertise (RISE) Act of 2025, the first stand-alone bill to pair a conditional liability shield for AI developers with a transparency mandate covering model training and specifications.

As with any newly proposed legislation, both the U.S. Senate and House would need to pass the bill by majority vote, and U.S. President Donald J. Trump would need to sign it before it becomes law, a process that would likely take months at the earliest.

“Bottom line: If we want America to lead and prosper in AI, we can’t let labs write the rules in the shadows,” Lummis wrote on X when announcing the new bill. “We need public, enforceable standards that balance innovation with trust. That’s what the RISE Act delivers. Let’s get it done.”

The bill also upholds traditional malpractice standards for doctors, lawyers, engineers, and other “learned professionals.”

If enacted as written, the measure would take effect on December 1, 2025, and apply only to conduct that occurs after that date.

Why Lummis says new AI legislation is necessary

The bill’s findings section paints a landscape of rapid AI adoption colliding with a patchwork of liability rules that chills investment and leaves professionals unsure where responsibility lies.

Lummis frames her answer as simple reciprocity: developers must be transparent, professionals must exercise judgment, and neither side should be punished for honest mistakes once both duties are met.

In a statement on her website, Lummis calls the measure “predictable standards that encourage safer AI development while preserving professional autonomy.”

With bipartisan concern mounting over opaque AI systems, RISE gives Congress a concrete template: transparency as the price of limited liability. Industry lobbyists may press for broader redaction rights, while public-interest groups could push for shorter disclosure windows or stricter opt-out limits. Professional associations, meanwhile, will scrutinize how the new documents can fit into existing standards of care.

Whatever shape the final legislation takes, one principle is now firmly on the table: in high-stakes professions, AI cannot remain a black box. And if the Lummis bill becomes law, developers who want legal peace will have to open that box—at least far enough for the people using their tools to see what is inside.

How the new ‘safe harbor’ provision shielding AI developers from lawsuits works

RISE offers immunity from civil suits only when a developer meets clear disclosure rules:

Model card – A public technical brief that lays out training data, evaluation methods, performance metrics, intended uses, and limitations.

Model specification – The full system prompt and other instructions that shape model behavior, with any trade-secret redactions justified in writing.

The developer must also publish known failure modes, keep all documentation current, and push updates within 30 days of a version change or newly discovered flaw. Miss the deadline—or act recklessly—and the shield disappears.
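For illustration only, here is a minimal sketch in Python of the fields such a model card would have to cover, plus a check of the 30-day update window. The bill prescribes no machine-readable schema, so every field name and the class itself are hypothetical, not drawn from the bill text.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ModelCard:
    """Hypothetical record of the disclosure fields RISE would require.
    The schema is illustrative; the bill defines no concrete format."""
    model_name: str
    version: str
    training_data_sources: list[str]      # training data description
    evaluation_methods: list[str]         # how the model was benchmarked
    performance_metrics: dict[str, float]
    intended_uses: list[str]
    limitations: list[str]
    known_failure_modes: list[str]        # must be published and kept current
    last_updated: date

    def updated_within_window(self, change_date: date) -> bool:
        """True if the card was refreshed within the 30-day window the
        bill sets after a version change or newly discovered flaw."""
        return change_date <= self.last_updated <= change_date + timedelta(days=30)

card = ModelCard(
    model_name="example-model",           # hypothetical
    version="1.2",
    training_data_sources=["public web corpus (described)"],
    evaluation_methods=["held-out benchmark suite"],
    performance_metrics={"accuracy": 0.91},
    intended_uses=["drafting support for licensed professionals"],
    limitations=["not a substitute for professional judgment"],
    known_failure_modes=["hallucinated citations"],
    last_updated=date(2025, 12, 10),
)
print(card.updated_within_window(date(2025, 12, 1)))  # True: within 30 days
```

Under the bill's logic, a card that fails this kind of currency check, or a recklessly acting developer, would forfeit the liability shield entirely.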

Professionals like doctors and lawyers remain ultimately liable for using AI in their practices

The bill does not alter existing duties of care.

A physician who misreads an AI-generated treatment plan, or a lawyer who files an AI-written brief without vetting it, remains liable to clients.

The safe harbor is unavailable for non-professional use, fraud, or knowing misrepresentation, and it expressly preserves any other immunities already on the books.

Reaction from AI 2027 project co-author

Daniel Kokotajlo, policy lead at the nonprofit AI Futures Project and a co-author of the widely circulated scenario planning document AI 2027, took to his X account to state that his team advised Lummis’s office during drafting and “tentatively endorse[s]” the result. He applauds the bill for nudging transparency yet flags three reservations:

Opt-out loophole. A company can simply accept liability and keep its specifications secret, limiting transparency gains in the riskiest scenarios.

Delay window. Thirty days between a release and required disclosure could be too long during a crisis.

Redaction risk. Firms might over-redact under the guise of protecting intellectual property; Kokotajlo suggests forcing companies to explain why each blackout truly serves the public interest.

The AI Futures Project views RISE as a step forward but not the final word on AI openness.

