The fixer’s dilemma: Chris Lehane and OpenAI’s impossible mission

By Advanced AI Editor · October 11, 2025 · 8 Mins Read


Chris Lehane is one of the best in the business at making bad news disappear. Al Gore’s press secretary during the Clinton years, Airbnb’s chief crisis manager through every regulatory nightmare from here to Brussels – Lehane knows how to spin. Now he’s two years into what might be his most impossible gig yet: as OpenAI’s VP of global policy, his job is to convince the world that OpenAI genuinely gives a damn about democratizing artificial intelligence while the company increasingly behaves like, well, every other tech giant that’s ever claimed to be different.

I had 20 minutes with him on stage at the Elevate conference in Toronto earlier this week – 20 minutes to get past the talking points and into the real contradictions eating away at OpenAI’s carefully constructed image. It wasn’t easy or entirely successful. Lehane is genuinely good at his job. He’s likable. He sounds reasonable. He admits uncertainty. He even talks about waking up at 3 a.m. worried about whether any of this will actually benefit humanity.

But good intentions don’t mean much when your company is subpoenaing critics, draining economically depressed towns of water and electricity, and bringing dead celebrities back to life to assert your market dominance.

The company’s Sora problem is really at the root of everything else. The video generation tool launched last week with copyrighted material seemingly baked right into it. It was a bold move for a company already getting sued by the New York Times, the Toronto Star, and half the publishing industry. From a business and marketing standpoint, it was also brilliant. The invite-only app soared to the top of the App Store as people created digital versions of themselves; of OpenAI CEO Sam Altman; of characters like Pikachu, Mario, and Cartman of “South Park”; and of dead celebrities like Tupac Shakur.

Asked what drove OpenAI’s decision to launch this newest version of Sora with these characters, Lehane gave me the standard pitch: Sora is a “general purpose technology” like electricity or the printing press, democratizing creativity for people without talent or resources. Even he – a self-described creative zero – can make videos now, he said on stage.

What he danced around is that OpenAI initially “let” rights holders opt out of having their work used to train Sora, which is not how copyright use typically works. Then, after OpenAI noticed that people really liked using copyrighted images, it “evolved” toward an opt-in model. That’s not really iterating. That’s testing how much you can get away with. (And by the way, though the Motion Picture Association made some noise last week about legal threats, OpenAI appears to have gotten away with quite a lot.)

Naturally, the situation brings to mind the aggravation of publishers who accuse OpenAI of training on their work without sharing the financial spoils. When I pressed Lehane about publishers getting cut out of the economics, he invoked fair use, that American legal doctrine that’s supposed to balance creator rights against public access to knowledge. He called it the secret weapon of U.S. tech dominance.

Maybe. But I’d recently interviewed Al Gore – Lehane’s old boss – and realized anyone could simply ask ChatGPT about it instead of reading my piece on TechCrunch. “It’s ‘iterative’,” I said, “but it’s also a replacement.”

For the first time, Lehane dropped his spiel. “We’re all going to need to figure this out,” he said. “It’s really glib and easy to sit here on stage and say we need to figure out new economic revenue models. But I think we will.” (We’re making it up as we go, in short.)

Then there’s the infrastructure question nobody wants to answer honestly. OpenAI is already operating a data center campus in Abilene, Texas, and recently broke ground on a massive data center in Lordstown, Ohio, in partnership with Oracle and SoftBank. Lehane has likened accessibility to AI to the advent of electricity – saying those who accessed it last are still playing catch-up – yet OpenAI’s Stargate project is seemingly targeting some of those same economically challenged places as spots to set up facilities with their massive appetites for water and electricity.

Asked during our sit-down whether these communities will benefit or merely foot the bill, Lehane went to gigawatts and geopolitics. OpenAI needs about a gigawatt of energy per week, he noted. China brought on 450 gigawatts last year plus 33 nuclear facilities. If democracies want democratic AI, they have to compete. “The optimist in me says this will modernize our energy systems,” he’d said, painting a picture of re-industrialized America with transformed power grids.

It was inspiring. But it was not an answer about whether people in Lordstown and Abilene are going to watch their utility bills spike while OpenAI generates videos of John F. Kennedy and The Notorious B.I.G. (Video generation is the most energy-intensive AI out there.)

Which brought me to my most uncomfortable example. Zelda Williams spent the day before our interview begging strangers on Instagram to stop sending her AI-generated videos of her late father, Robin Williams. “You’re not making art,” she wrote. “You’re making disgusting, over-processed hotdogs out of the lives of human beings.”

When I asked about how the company reconciles this kind of intimate harm with its mission, Lehane answered by talking about processes, including responsible design, testing frameworks, and government partnerships. “There is no playbook for this stuff, right?”

Lehane showed vulnerability in some moments, saying that he wakes up at 3 a.m. every night, worried about democratization, geopolitics, and infrastructure. “There’s enormous responsibilities that come with this.”

Whether or not those moments were designed for the audience, I believe him. Indeed, I left Toronto thinking I’d watched a master class in political messaging – Lehane threading an impossible needle while dodging questions about company decisions that, for all I know, he doesn’t even agree with. Then Friday happened.

Nathan Calvin, a lawyer who works on AI policy at a nonprofit advocacy organization, Encode AI, revealed that at the same time I was talking with Lehane in Toronto, OpenAI had sent a sheriff’s deputy to his house in Washington, D.C., during dinner to serve him a subpoena. They wanted his private messages with California legislators, college students, and former OpenAI employees.

Calvin is accusing OpenAI of intimidation tactics around a new piece of AI regulation, California’s SB 53. He says the company weaponized its legal battle with Elon Musk as a pretext to target critics, implying Encode was secretly funded by Musk. In fact, Calvin says he fought OpenAI’s opposition to California’s SB 53, an AI safety bill, and that when he saw the company claim it “worked to improve the bill,” he “literally laughed out loud.” In a social media thread, he went on to call Lehane specifically the “master of the political dark arts.”

In Washington, that might be a compliment. At a company like OpenAI, whose mission is “to build AI that benefits all of humanity,” it sounds like an indictment.

What matters much more is that even OpenAI’s own people are conflicted about what they’re becoming.

As my colleague Max reported last week, a number of current and former employees took to social media after Sora 2 was released, expressing their misgivings, including Boaz Barak, an OpenAI researcher and Harvard professor, who wrote about Sora 2 that it is “technically amazing but it’s premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes.”

On Friday, Josh Achiam – OpenAI’s head of mission alignment – tweeted something even more remarkable about Calvin’s accusation. Prefacing his comments by saying they were “possibly a risk to my whole career,” Achiam went on to write of OpenAI: “We can’t be doing things that make us into a frightening power instead of a virtuous one. We have a duty to and a mission for all of humanity. The bar to pursue that duty is remarkably high.”

That’s . . . something. An OpenAI executive publicly questioning whether his company is becoming “a frightening power instead of a virtuous one” isn’t on a par with a competitor taking shots or a reporter asking questions. This is someone who chose to work at OpenAI, who believes in its mission, and who is now acknowledging a crisis of conscience despite the professional risk.

It’s a crystallizing moment. You can be the best political operative in tech, a master at navigating impossible situations, and still end up working for a company whose actions increasingly conflict with its stated values – contradictions that may only intensify as OpenAI races toward artificial general intelligence.

It has me thinking that the real question isn’t whether Chris Lehane can sell OpenAI’s mission. It’s whether others – including, critically, the other people who work there – still believe it.
