AI Impersonating Humans Terrifies Sam Altman, But OpenAI Doesn’t Want More Regulation

By Advanced AI Editor | July 24, 2025


OpenAI CEO Sam Altman is in Washington, D.C., this week, to lobby the U.S. government for friendly regulation around the development and use of AI. Reports before Altman’s visit claimed the CEO would make it clear to politicians that ChatGPT is already an incredibly important tool for American and worldwide users alike. Altman would acknowledge AI’s threat to jobs, but focus on the long-term benefits of democratizing AI, taking a middle-of-the-road approach.

OpenAI’s agenda is obvious. The company needs support from the government to create even better versions of ChatGPT on the road to superintelligence. That means anything from incentives to build AI infrastructure in the U.S. to laxer AI regulation that would support faster progress in the field. The latter point actually contradicts Altman’s remarks at the Federal Reserve on Tuesday.

The CEO admitted he’s terrified of AI impersonating humans, warning of an AI “fraud crisis.” He’s also worried about malicious actors developing and misusing AI superintelligence before the world can build systems to protect itself.

What’s An AI Fraud Crisis?

PayPal app on an iPhone next to credit cards and cash. – Bohdan Aleksandrovych/Shutterstock

“A thing that terrifies me is apparently there are still some financial institutions that will accept a voice print as authentication for you to move a lot of money or do something else — you say a challenge phrase, and they just do it,” Altman said, per CNN. “That is a crazy thing to still be doing … AI has fully defeated most of the ways that people authenticate currently, other than passwords.”

While ChatGPT offers an Advanced Voice Mode that lets you talk to the AI via voice, it can’t be used to replicate a person’s voice. However, other AI services might support such functionality, which could enable abuse. There’s also a variety of open-source AI models that malicious actors might install on their devices and then figure out ways to have them clone the voices of real people. Such AI-based scams exist, with attackers using them to extract information and money from unsuspecting victims.

As Altman warned, it’s not just our voices that bad actors might be able to clone with AI tools. “I am very nervous that we have an impending, significant, impending fraud crisis,” Altman said. “Right now, it’s a voice call; soon it’s going to be a video or FaceTime that’s indistinguishable from reality.”

Again, ChatGPT doesn’t offer such functionality, nor do other chatbots. But AI technology to create lifelike images and videos already exists. Some tools even let people create videos from a still image — Google’s Veo 3 in Gemini is one example. But these products have built-in protections to prevent scenarios like the ones Altman is describing.

What Does OpenAI Want?

A concept image for ChatGPT’s reasoning abilities. – OpenAI

Altman being vocal about the dangers of AI is something to appreciate. He’s not painting a rosy picture where AI is the solution to everything. AI can be misused for nefarious purposes. Altman also said that the idea of malicious actors abusing AI superintelligence before the world can protect itself is one thing that keeps him up at night.

These concerns echo Altman’s remarks at the end of OpenAI’s ChatGPT Agent livestream last week. He explained that the new AI agent opens the door to abuse. While OpenAI has built protections into the feature to prevent bad actors from tricking the AI into revealing personal information about the user, there’s always a chance that sophisticated attacks might break through in the future.

Despite those nightmare-inducing fears, OpenAI isn’t advocating for stronger oversight from the government. OpenAI’s proposals for the U.S. AI Action Plan call for laxer regulation so U.S. AI firms can compete with foreign rivals, especially Chinese companies. As the company put it, “We propose a holistic approach that enables voluntary partnership between the federal government and the private sector, and neutralizes potential PRC benefit from American AI companies having to comply with overly burdensome state laws.”

For example, OpenAI wants the U.S. government to allow AI firms to train frontier models on copyrighted material while securing the rights of content creators. OpenAI has also called for tighter export controls on AI technology, support for improved infrastructure, and broader adoption of AI products by the U.S. government.

How To Protect Yourself

The ChatGPT prompt composer. – YouTube

Stricter AI laws might prevent some of the abuse that Altman described during his interview at the Federal Reserve; with better protections for users, the world might not be on the precipice of a “fraud crisis.” On the other hand, laxer AI development laws in other jurisdictions would still enable the creation of tools bad actors can use to impersonate people.

Internet users should be aware of the threats AI poses, whether they use ChatGPT or other chatbots. They should avoid sending personal data or money to third parties without verifying the authenticity of their claims. Do not send money via PayPal or Venmo to someone claiming to be a friend or family member on a phone call without confirming the caller is really who they say they are, since attackers can use AI to spoof a person’s voice or appearance. On that note, PayPal is already using AI to prevent scams via PayPal and Venmo payments.

Also, do not give AI chatbots personal information, especially more advanced tools like ChatGPT Agent. Always be involved in the process of making purchases online, even if a chatbot adds the products to your basket. Finally, keep yourself informed. In the future, we might have tools to prove to financial and health institutions that you’re human. As CNN points out, Altman is backing such a tool. It’s called The Orb, and it should offer proof of humanity in the world of AI.

Read the original article on BGR.


