TechCrunch AI

Ex-OpenAI staffers file amicus brief opposing the company’s for-profit transition

By Advanced AI Bot | April 11, 2025

A group of ex-OpenAI employees on Friday filed a proposed amicus brief in support of Elon Musk in his lawsuit against OpenAI, opposing OpenAI’s planned conversion from a non-profit to a for-profit corporation.

The brief, filed by Harvard law professor and Creative Commons founder Lawrence Lessig, names twelve former OpenAI employees: Steven Adler, Rosemary Campbell, Neil Chowdhury, Jacob Hilton, Daniel Kokotajlo, Gretchen Krueger, Todor Markov, Richard Ngo, Girish Sastry, William Saunders, Carrol Wainwright, and Jeffrey Wu. It makes the case that, if OpenAI’s non-profit ceded control of the organization’s business operations, it would “fundamentally violate its mission.”

Several of the ex-staffers have spoken out against OpenAI’s practices publicly before. Krueger has called on the company to improve its accountability and transparency, while Kokotajlo and Saunders previously warned that OpenAI is in a “reckless” race for AI dominance. Wainwright has said that OpenAI “should not [be trusted] when it promises to do the right thing later.”

In a statement, an OpenAI spokesperson said that OpenAI’s nonprofit “isn’t going anywhere” and that the organization’s mission “will remain the same.”

“Our board has been very clear,” the spokesperson told TechCrunch via email. “We’re turning our existing for-profit arm into a public benefit corporation (PBC) — the same structure as other AI labs like Anthropic — where some of these former employees now work — and [Musk’s AI startup] xAI.”

OpenAI was founded as a non-profit in 2015, but it converted to a “capped-profit” in 2019, and is now trying to restructure once more into a PBC. When it transitioned to a capped-profit, OpenAI retained its nonprofit wing, which currently has a controlling stake in the organization’s corporate arm.

Musk’s suit against OpenAI accuses the startup of abandoning its non-profit mission, which aimed to ensure its AI research benefits all humanity. Musk had sought a preliminary injunction to halt OpenAI’s conversion. A federal judge denied the request, but permitted the case to go to a jury trial in spring 2026.

According to the ex-OpenAI employees’ brief, OpenAI’s present structure — a nonprofit controlling a group of other subsidiaries — is a “crucial part” of its overall strategy and “critical” to the organization’s mission. Restructuring that removes the nonprofit’s controlling role would not only contradict OpenAI’s mission and charter commitments, but would also “breach the trust of employees, donors, and other stakeholders who joined and supported the organization based on these commitments,” asserts the brief.

“OpenAI committed to several key principles for executing on [its] mission in their charter document,” the brief reads. “These commitments were taken extremely seriously within the company and were repeatedly communicated and treated internally as being binding. The court should recognize that maintaining the nonprofit’s governance is essential to preserving OpenAI’s unique structure, which was designed to ensure that artificial general intelligence benefits humanity rather than serving narrow financial interests.”

Artificial general intelligence, or AGI, is broadly understood to mean AI that can complete any task a human can.

According to the brief, OpenAI often used its structure as a recruitment tool — and repeatedly assured staff that nonprofit control was “critical” to executing its mission. The brief recounts an OpenAI all-hands meeting toward the end of 2020 during which OpenAI CEO Sam Altman allegedly stressed that the nonprofit’s governance and oversight were “paramount” in “guaranteeing that safety and broad societal benefits were prioritized over short-term financial gains.”

“In recruiting conversations with candidates, it was common to cite OpenAI’s unique governance structure as a critical differentiating factor between OpenAI and competitors such as Google or Anthropic and an important reason they should consider joining the company,” reads the brief. “This same reason was also often used to persuade employees who were considering leaving for competitors to stay at OpenAI — including some of us.”

The brief warns that, should OpenAI be allowed to convert to a for-profit, it might be incentivized to “[cut] corners” on safety work and develop powerful AI “concentrated among its shareholders.” A for-profit OpenAI would have little reason to abide by the “merge and assist” clause in OpenAI’s current charter, which pledges that OpenAI will stop competing with and assist any “value-aligned, safety-conscious” project that achieves AGI before it does, asserts the brief.

The ex-OpenAI employees, some of whom were research and policy leaders at the company, join a growing cohort voicing strong opposition to OpenAI’s transition.

Earlier this week, a group of organizations, including non-profits and labor groups like the California Teamsters, petitioned California Attorney General Rob Bonta to stop OpenAI from becoming a for-profit. They claimed the company has “failed to protect its charitable assets” and is actively “subverting its charitable mission to advance safe artificial intelligence.”

Encode, a non-profit organization that co-sponsored California’s ill-fated SB 1047 AI safety legislation, cited similar concerns in an amicus brief filed in December.

OpenAI has said that its conversion would preserve its non-profit arm and infuse it with resources to be spent on “charitable initiatives” in sectors such as healthcare, education, and science. In exchange for its controlling stake in OpenAI’s enterprise, the nonprofit would reportedly stand to reap billions of dollars.

“We’re actually getting ready to build the best-equipped nonprofit the world has ever seen — we’re not converting it away,” the company wrote in a series of posts on X on Wednesday.

The stakes are high for OpenAI, which needs to complete its for-profit conversion by the end of this year or next, or risk relinquishing some of the capital it has raised in recent months, according to reports.


