Advanced AI News
California’s Draft AI Law Would Protect More than Just People

By Advanced AI Editor · July 16, 2025 · 6 min read


Few places in the world have more to gain from a flourishing AI industry than California. Few also have more to lose if the public’s trust in the industry were suddenly shattered.

In May, the California Senate passed SB 1047, a piece of AI safety legislation, by a vote of 32 to one, helping ensure the safe development of large-scale AI systems through clear, predictable, common-sense safety standards. The bill is now slated for a State Assembly vote this week and, if signed into law by Governor Gavin Newsom, would represent a significant step in protecting California citizens and the state's burgeoning AI industry from malicious use.

Late Monday, Elon Musk shocked many by announcing his support for the bill in a post on X. “This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill,” he wrote. “For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public.”

The post came days after I spoke with Musk about SB 1047. Unlike other corporate leaders who often waver, consulting their PR teams and lawyers before taking a stance on safety legislation, Musk was different. After I outlined the importance of the bill, he requested to review its text to ensure its fairness and lack of potential for abuse. The next day he came out in support. This quick decision-making process is a testament to Musk’s long-standing advocacy for responsible AI regulation.

Last winter, Senator Scott Wiener, the bill's author, reached out to the Center for AI Safety (CAIS) Action Fund for technical suggestions and cosponsorship. As CAIS's founder, I have made the safe development of transformative technologies the cornerstone of our mission. To preserve innovation, we must anticipate potential pitfalls, because an ounce of prevention is worth a pound of cure. Recognizing SB 1047's groundbreaking nature, we were thrilled to help and have advocated for its adoption ever since.

Read More: Exclusive: California Bill Proposes Regulating AI at State Level

Targeted at the most advanced AI models, the bill will require large companies to test for hazards, implement safeguards, ensure shutdown capabilities, protect whistleblowers, and manage risks. These measures aim to prevent cyberattacks on critical infrastructure, bioengineering of viruses, and other malicious activities with the potential to cause widespread destruction and mass casualties.

Anthropic recently warned that AI risks could emerge in “as little as 1-3 years,” disputing critics who view safety concerns as imaginary. Of course, if these risks are indeed fictitious, developers shouldn’t fear liability. Moreover, developers have pledged to tackle these issues, aligning with President Joe Biden’s recent executive order, reaffirmed at the 2024 AI Seoul Summit.

Enforcement is lean by design, allowing California’s Attorney General to act only in extreme cases. There are no licensing requirements for new models, nor does it punish honest mistakes or criminalize open sourcing—the practice of making software source code freely available. It wasn’t drafted by Big Tech or those focused on distant future scenarios. The bill aims to prevent frontier labs from neglecting caution and critical safeguards in their rush to release the most capable models.

Like most AI safety researchers, I am in large part driven by a belief in AI's immense potential to benefit society, and deeply concerned about preserving that potential. As a global leader in AI, California is too. This shared concern is why state politicians and AI safety researchers are enthusiastic about SB 1047, as history tells us that a major disaster, like the nuclear one at Three Mile Island on March 28, 1979, could set a burgeoning industry back decades.

Regulatory bodies responded to the partial nuclear meltdown by overhauling nuclear safety standards and protocols. These changes increased the operational costs and complexity of running nuclear plants, as operators invested in new safety systems and complied with rigorous oversight. The regulatory challenges made nuclear energy less appealing, halting its expansion over the next 30 years.

Three Mile Island led to a greater dependence on coal, oil, and natural gas. It is often argued that this was a significant lost opportunity to advance toward a more sustainable and efficient global energy infrastructure. While it remains uncertain whether stricter regulations could have averted the incident, it is clear that a single event can profoundly impact public perception, stifling the long-term potential of an entire industry.

Some people will view any government action on industry with suspicion, considering it inherently detrimental to business, innovation, and a state or country’s competitive edge. Three Mile Island demonstrates this perspective is short-sighted, as measures to reduce the chances of a disaster are often in the long-term interest of emerging industries. It is also not the only cautionary tale for the AI industry.

When social media platforms first emerged, they were largely met with enthusiasm and optimism. A 2010 Pew Research Center survey found that 67% of American adults who used social media believed it had a mostly positive impact. Futurist Brian Solis captured this ethos when he proclaimed, “Social media is the new way to communicate, the new way to build relationships, the new way to build businesses, and the new way to build a better world.”

He was three-fourths correct.

Driven by concerns over privacy breaches, misinformation, and mental health impacts, public perception of social media has flipped, with 64% of Americans viewing it negatively. Scandals like Cambridge Analytica eroded trust, while fake news and polarizing content highlighted social media’s role in societal division. A Royal Society for Public Health study showed 70% of young people experienced cyberbullying, with 91% of 16-24-year-olds stating social media harms their mental wellbeing. Users and policymakers around the globe are increasingly vocal about needing stricter regulations and greater accountability from social media companies.

This did not happen because social media companies are uniquely evil. Like other emerging industries, the early days were a "wild west" in which companies rushed to dominate a burgeoning market and government regulation was lacking. Platforms with addictive, often harmful content thrived, and we are now all paying the price. That includes the companies themselves, which are increasingly mistrusted by consumers and in the crosshairs of regulators, legislators, and courts.

The optimism surrounding social media wasn’t misplaced. The technology did have the potential to break down geographical barriers and foster a sense of global community, democratize information, and facilitate positive social movements. As the author Erik Qualman warned, “We don’t have a choice on whether we do social media, the question is how well we do it.”

The lost potential of social media and nuclear energy was tragic, but it’s nothing compared to squandering AI’s potential. Smart legislation like SB 1047 is our best tool for preventing this while protecting innovation and competition.

The history of technological regulation showcases our capacity for foresight and adaptability. When railroads transformed 19th-century transportation, governments standardized track gauges, signaling, and safety protocols. The advent of electricity led to codes and standards preventing fires and electrocutions. The automobile revolution necessitated traffic laws and safety measures like seat belts and airbags. In aviation, bodies like the FAA established rigorous safety standards, making flying the safest form of transportation.

History can only provide us with lessons. Whether to heed them is up to us.


