Advanced AI News

Helen Toner, former OpenAI director: ‘If all AI development stopped now, its impact is already as big as the internet’ | Technology

By Advanced AI Bot · June 2, 2025 · 9 min read


At 33 years old, Helen Toner has had a long and eventful relationship with artificial intelligence (AI). Her defining moment was her entry onto the board of directors of OpenAI in 2021, and even more so the manner of her departure, shortly after voting to dismiss Sam Altman as CEO in November 2023. Toner declines to discuss that experience at the company that created ChatGPT, but she has previously shared her opinion: Altman was not honest with the board. “We found out about ChatGPT on Twitter,” Toner said. She also said that, according to some executives, Altman had created a “toxic atmosphere” at the company.

That battle ended with Altman’s reinstatement. Toner now closely follows the development of AI in the defense industry and in China, two of her specialties, from the Center for Security and Emerging Technology (CSET) at Georgetown University in Washington. She has already testified before Congress about her vision for the future of AI, which she explains in this video interview.

Question. Why do many users ascribe magical properties to AI?

Answer. I would say there are two reasons. One obvious reason is that the way we’ve learned about AI is through science fiction, and science fiction AI is usually infallible. Maybe it’s good, maybe it’s bad, but it always has the right answers and can do everything. Its role in fiction is similar to that of fairies, gods, or supernatural beings. A different answer could be that we have to change how we think about interacting with computers, because the computers we’re used to interacting with are very reliable. A calculator always gives you the right answer; it never stumbles on the third line of a long division. But [AI systems] are non-deterministic: they give different answers, and they’re just pattern matching rather than carrying out a perfectly specified algorithm to solve your problems. It’s quite new and quite different.

Q. How far will AI go?

A. People talk as though there’s a finish line, which is AGI, artificial general intelligence. If you try and dig into what that is, a lot of people imagine that it’s when AI is kind of like a human, as good as a human. But we know that AI is never going to have the same skill profile as a human. There are already things at which AI is much better than us, and others at which it is worse. Even if we achieve AI as good as a human, it will still be different. I don’t think we will ever have robots that are as good as humans at social dancing, like salsa or swing dancing where you’re improvising live with another human. It doesn’t make sense to build something like that. But we will have very powerful AI systems, capable of surpassing humans in strategic or intellectual tasks. We talk as though there is an obvious destination, but in reality we don’t know what the future will look like for AI systems to become more advanced.

Q. What is your main concern with AI?

A. I think there are many areas of concern. The bad scenarios that seem most likely to me are in the category where we choose to give more and more autonomy to the AI systems and we choose to embed them more and more in society and in the economy and the military. That would be step one. Step two could be many different scenarios. It could be that AIs start cooperating with each other and eventually take control of humans. It could be that we live in a world that kind of seems nice but it’s meaningless: scrolling TikTok all day and having nice junk food to eat but without very meaningful lives. Or that a small number of people start collaborating with the AIs, take control, and we end up in a totalitarian world, with a very concentrated kind of power.

Q. Do you really think that can happen?

A. I think it could go very, very badly. I think there’s a much wider range of things that could go very, very wrong than just “everyone dies.” Maybe everyone will die, but that seems like a very specific scenario to focus on.

Helen Toner, in a provided image. Photo: Bassem Moussa

Q. We’re already seeing studies where AI is capable of manipulation. Are you concerned?

A. It depends on what they’re trying to convince you of. A kind of persuasion that seems more realistic to me is AI-based cults or AI-based religions, where people believe that the AI is their friend and is looking out for them, and is a wise being they should follow and listen to. The history of human cults tells us that it’s pretty easy to found one and get people to do some pretty crazy things. So far, they’ve usually involved a small number of people. I’m sure we’re going to see AI-based cults.

Q. It doesn’t seem a trivial matter to create an AI capable of leading a cult.

A. It’s going to be interesting. At a minimum there’s going to be a lot of people who have relationships with their AI partners.

Q. How advanced is China in AI?

A. It depends what part of the AI you’re looking at. In more advanced systems, the U.S. is still ahead. I work in national security, and I think it’s harder to tell there. How AI is applied within the military will matter more. I would expect the U.S. to be doing better than China, but it’s not easy to tell, and it’s a different kind of problem than building more advanced models. When we’re talking about frontier models, that’s also hard to measure. I don’t know if I’d say China is a year or two behind. The most recent data we have is the release of DeepSeek V3. DeepSeek released two models, and a lot of people were excited about R1, the reasoning model. But among people working in AI, a lot of people think the other one, V3, is more impressive. They released that one in December, and it was probably six to nine months behind the equivalent model in the U.S., and the reasoning model was about three or four months behind. But since then, U.S. companies have continued to make progress, and China still hasn’t responded. They’re currently six to twelve months behind, but that gap could widen with the implementation of chip export controls.

Q. How afraid should we be of military AI?

A. AI is a lot of different things. It’s very natural and very reasonable for militaries to be using AI. All militaries have a huge administrative component, with human resources and finance, so it makes perfect sense for them to use it the way businesses do. It also makes sense to use AI to process imagery when you have far more of it than your human analysts can look at. They’ve also tried to use it for what they call “predictive maintenance”: for example, if you have a bunch of helicopters and you’re trying to figure out when you should be doing maintenance on them, AI can better predict when a motor is going to burn out or when a different part needs to be replaced.

Q. And autonomous weapons?

A. The debates about AI and the military focus too much on that topic. And it’s only a small part of how AI could be used in the military. Focusing only on autonomous weapons is not the best approach. I tend to think that the more important thing for us to focus on is our existing laws of armed conflict, on international humanitarian law. Are you distinguishing between combatants and civilians? Are you responding proportionately to attacks? When we imagine scenarios where AI and warfare go very wrong, it’s often because, for example, civilians are being attacked instead of combatants. And that is already illegal under international law.

Q. Will the AI business model always be subscription-based?

A. I think the big money will come from businesses, not consumers. Because it’s a multipurpose technology, not just a fun chatbot that people carry in their pockets, it will be a very powerful tool for all kinds of economic productivity and innovation. So I imagine the revenue on the business side will be very large.

Q. If AI doesn’t advance further, does it already have the capacity to change the economy?

A. Yes. General-purpose technologies, like AI, take decades to integrate into the economy. If all AI development stopped now, I don’t know if it would be as huge of a transformation as people are forecasting, but I think certainly as big as the internet. The transition from having no internet to having it took decades. Even if we stopped now, there would still be a huge amount of work to do to integrate AI into healthcare, education, business processes, law, and finance.

Q. And have we already reached the ceiling of what AI can do?

A. I wouldn’t bet on that. The story of the last 15 years in AI hasn’t been one of huge innovations that completely change how AI is built, but rather of small and medium-sized improvements that stack on top of each other to keep delivering gains. Right now, we have these large language models, chatbots, that we’ve become accustomed to. But companies are pushing in two main directions. One, reasoning models: teaching them to think step by step so they can work through harder problems. And two, agents: teaching them not just to chat with you in a little browser window, but to go out and do things for you, take actions, and be productive. We’ll see big improvements there over the next 25 years.

Q. It is impossible to predict in which direction.

A. AI will change how the whole political economy of society works. We’re in an unusual moment right now. There are many democratic countries where the people are in control in some meaningful way. But that seems to be starting to slip in some important countries. AI will reconfigure the balance of power, for example, in the labor market. How much power do workers have if they strike? How many soldiers are needed in an army if you have certain kinds of AI? Labor power will presumably decrease with AI. If we can end up in a world where people have their basic needs met and they have access to AI, that could be a wonderful world. But I do worry that the basic condition of humanity for most of history is “might makes right” and whoever has the power gets to do what they want, and people who don’t have power lead much worse lives.

Sign up for our weekly newsletter to get more English-language news coverage from EL PAÍS USA Edition


