Scott Wiener on his fight to make Big Tech disclose AI’s dangers

By Advanced AI Editor | September 23, 2025

This is not California state senator Scott Wiener’s first attempt at addressing the dangers of AI.

In 2024, Silicon Valley mounted a fierce campaign against his controversial AI safety bill, SB 1047, which would have made tech companies liable for the potential harms of their AI systems. Tech leaders warned that it would stifle America’s AI boom. Governor Gavin Newsom ultimately vetoed the bill, echoing similar concerns, and a popular AI hacker house promptly threw an “SB 1047 Veto Party.” One attendee told me, “Thank god, AI is still legal.”

Now Wiener has returned with a new AI safety bill, SB 53, which sits on Governor Newsom’s desk awaiting his signature or veto sometime in the next few weeks. This time around, the bill is much more popular, or at least Silicon Valley doesn’t seem to be at war with it.

Anthropic outright endorsed SB 53 earlier this month. Meta spokesperson Jim Cullinan tells TechCrunch that the company supports AI regulation that balances guardrails with innovation and says, “SB 53 is a step in that direction,” though there are areas for improvement.

Former White House AI policy adviser Dean Ball tells TechCrunch that SB 53 is a “victory for reasonable voices,” and thinks there’s a strong chance Governor Newsom signs it.

If signed, SB 53 would impose some of the nation’s first safety reporting requirements on AI giants like OpenAI, Anthropic, xAI, and Google — companies that today face no obligation to reveal how they test their AI systems. Many AI labs voluntarily publish safety reports explaining how their AI models could be used to create bioweapons and other dangers, but they do this at will and they’re not always consistent.

The bill requires leading AI labs — specifically those making more than $500 million in revenue — to publish safety reports for their most capable AI models. Much like SB 1047, the bill specifically focuses on the worst kinds of AI risks: their ability to contribute to human deaths, cyberattacks, and chemical weapons. Governor Newsom is considering several other bills that address other types of AI risks, such as engagement-optimization techniques in AI companions.

SB 53 also creates protected channels for employees working at AI labs to report safety concerns to government officials, and establishes a state-operated cloud computing cluster, CalCompute, to provide AI research resources beyond the Big Tech companies.

One reason SB 53 may be more popular than SB 1047 is that it’s less severe. SB 1047 would have made AI companies liable for any harms caused by their AI models, whereas SB 53 focuses more on requiring self-reporting and transparency. SB 53 also applies narrowly to the world’s largest tech companies rather than to startups.

But many in the tech industry still believe states should leave AI regulation up to the federal government. In a recent letter to Governor Newsom, OpenAI argued that AI labs should only have to comply with federal standards — which is a funny thing to say to a state governor. Venture firm Andreessen Horowitz wrote a recent blog post vaguely suggesting that some bills in California could violate the Constitution’s dormant Commerce Clause, which prohibits states from unfairly limiting interstate commerce.

Senator Wiener addresses these concerns: He lacks faith in the federal government to pass meaningful AI safety regulation, so states need to step up. In fact, Wiener thinks the Trump administration has been captured by the tech industry and that recent federal efforts to block all state AI laws are a form of Trump “rewarding his funders.”

The Trump administration has made a notable shift away from the Biden administration’s focus on AI safety, replacing it with an emphasis on growth. Shortly after taking office, Vice President J.D. Vance appeared at an AI conference in Paris and said: “I’m not here this morning to talk about AI safety, which was the title of the conference a couple of years ago. I’m here to talk about AI opportunity.”

Silicon Valley has applauded this shift, exemplified by Trump’s AI Action Plan, which removed barriers to building out the infrastructure needed to train and serve AI models. Today, Big Tech CEOs are regularly seen dining at the White House or announcing hundred-billion-dollar data centers alongside President Trump.

Senator Wiener thinks it’s critical for California to lead the nation on AI safety, but without choking off innovation.

I recently interviewed Senator Wiener to discuss his years at the negotiating table with Silicon Valley and why he’s so focused on AI safety bills. Our conversation has been edited lightly for clarity and brevity.

Senator Wiener, I interviewed you when SB 1047 was sitting on Governor Newsom’s desk. Talk to me about the journey you’ve been on to regulate AI safety in the last few years.

It’s been a roller coaster, an incredible learning experience, and just really rewarding. We’ve been able to help elevate this issue [of AI safety], not just in California, but in the national and international discourse.

We have this incredibly powerful new technology that is changing the world. How do we make sure it benefits humanity in a way where we reduce the risk? How do we promote innovation while also being very mindful of public health and public safety? It’s an important — and in some ways, existential — conversation about the future. SB 1047, and now SB 53, have helped to foster that conversation about safe innovation.

In the last 20 years of technology, what have you learned about the importance of laws that can hold Silicon Valley to account?

I’m the guy who represents San Francisco, the beating heart of AI innovation. I’m immediately north of Silicon Valley itself, so we’re right here in the middle of it all. But we’ve also seen how the large tech companies — some of the wealthiest companies in world history — have been able to stop federal regulation.

Every time I see tech CEOs having dinner at the White House with the aspiring fascist dictator, I have to take a deep breath. These are all really brilliant people who have generated enormous wealth. A lot of folks I represent work for them. It really pains me when I see the deals that are being struck with Saudi Arabia and the United Arab Emirates, and how that money gets funneled into Trump’s meme coin. It causes me deep concern.

I’m not someone who’s anti-tech. I want tech innovation to happen. It’s incredibly important. But this is an industry that we should not trust to regulate itself or make voluntary commitments. And that’s not casting aspersions on anyone. This is capitalism, and it can create enormous prosperity but also cause harm if there are not sensible regulations to protect the public interest. When it comes to AI safety, we’re trying to thread that needle.

SB 53 is focused on the worst harms that AI could imaginably cause — death, massive cyberattacks, and the creation of bioweapons. Why focus there?

The risks of AI are varied. There is algorithmic discrimination, job loss, deepfakes, and scams. There have been various bills in California and elsewhere to address those risks. SB 53 was never intended to cover the field and address every risk created by AI. We’re focused on one specific category of risk: catastrophic risk.

That issue came to me organically from folks in the AI space in San Francisco — startup founders, frontline AI technologists, and people who are building these models. They came to me and said, “This is an issue that needs to be addressed in a thoughtful way.”

Do you feel that AI systems are inherently unsafe, or have the potential to cause death and massive cyberattacks?

I don’t think they’re inherently safe. I know there are a lot of people working in these labs who care very deeply about trying to mitigate risk. And again, it’s not about eliminating risk. Life is about risk. Unless you’re going to live in your basement and never leave, you’re going to have risk in your life. Even in your basement, the ceiling might fall down.

Is there a risk that some AI models could be used to do significant harm to society? Yes, and we know there are people who would love to do that. We should try to make it harder for bad actors to cause these severe harms, and so should the people developing these models.

Anthropic issued its support for SB 53. What are your conversations like with other industry players?

We’ve talked to everyone: large companies, small startups, investors, and academics. Anthropic has been really constructive. Last year, they never formally supported [SB 1047], but they had positive things to say about aspects of the bill. I don’t think [Anthropic] loves every aspect of SB 53, but I think they concluded that on balance the bill was worth supporting.

I’ve had conversations with large AI labs who are not supporting the bill, but are not at war with it in the way they were with SB 1047. It’s not surprising: SB 1047 was more of a liability bill, while SB 53 is more of a transparency bill. Startups have been less engaged this year because the bill really focuses on the largest companies.

Do you feel pressure from the large AI PACs that have formed in recent months?

This is another symptom of Citizens United. The wealthiest companies in the world can just pour endless resources into these PACs to try to intimidate elected officials. Under the rules we have, they have every right to do that. It’s never really impacted how I approach policy. There have been groups trying to destroy me for as long as I’ve been in elected office. Various groups have spent millions trying to blow me up, and here I am. I’m in this to do right by my constituents and try to make my community, San Francisco, and the world a better place.

What’s your message to Governor Newsom as he’s debating whether to sign or veto this bill?

My message is that we heard you. You vetoed SB 1047 and provided a very comprehensive and thoughtful veto message. You wisely convened a working group that produced a very strong report, and we really looked to that report in crafting this bill. The governor laid out a path, and we followed that path in order to come to an agreement, and I hope we got there.


