Advanced AI News

Scammers know how to make it sound like someone you know is calling

By Advanced AI Editor | December 19, 2024


Susan Tompor | Detroit Free Press

[Video: "5 scams that seniors need to avoid." With the emergence of AI and other new technologies, people have become more susceptible to scams online, especially older people.]

Just as you’re ready to mingle and jingle, it’s time for a warning about how a holiday-themed TikTok or Facebook reel that you post now could end up being used by scammers with AI-cloning tools to steal money from Grandma.

Even scarier, the same could be said about that friendly message you're leaving on your voicemail. Yep, we're now being told that it's wise to ditch the "Hi, this is Sally, can't come to the phone right now" custom message and go with the boring, prerecorded default greeting offered on your cell phone, which uses a voice that isn't yours.

It's not exactly the cheery kind of stuff we want to hear as the calendar moves closer to 2025. But it's not exactly the kind of message we can afford to ignore, either.

Artificial intelligence tools can replicate our voices

Cyber criminals have a few new tools that experts say will open up the door for even more fraud in the next few years — AI-powered voice and video cloning techniques.

Scammers want our voices and videos so that they can do a more convincing job of impersonating us when they’re out to steal money. Such cloning can be wrongly used when crooks make a call pretending to be a grandson who claims to need money to get out of jail, a boss who wants you to pay some mysterious invoice, a romantic interest met on social media and a host of others.

The FBI is warning that artificial intelligence tools pose an escalating threat to consumers and businesses as cyber criminals use AI to conduct sophisticated phishing and social engineering attacks.

Michigan Attorney General Dana Nessel in early December warned residents that rapid advancements in AI are being misused to create “deepfake audio and video scams so realistic that they can even fool those who know us best.”

We’re not hearing from local law enforcement about a ton of such voice-impersonation scams taking place yet. But experts say people need to be prepared for an onslaught of activity and take precautions.

Those operating sophisticated fraud rings need only roughly three seconds of your voice to duplicate it — replicating the pitch of your voice, your tone, the pace at which you speak — when the crooks use some readily available, low-cost AI tools, according to Greg Bohl, chief data officer for Transaction Network Services. The company provides services to the telecommunications industry, including cell phone companies. Bohl's work focuses on developing AI technologies that can be used to combat fraud.

Many times, Bohl said, criminals will take recordings that are already readily available on social media or elsewhere, such as your cell phone's voicemail greeting, to clone a voice.

“The longer the greeting, the more accurate they can be with that voice replication,” Bohl told me via a video conference call.

He called a 30-second snippet on a voicemail or a social media post a “gold mine for bad actors.”

Many scams already spoof a legitimate phone number to make it appear like the call is coming from a well-known business or government agency. Often, real names are even used to make it seem like you’re really hearing from someone who works at that agency or business.

But this new AI-cloning development will take scams to an entirely new level, making it harder for consumers to spot fraudulent robocalls and texts.

The Federal Communications Commission warns that AI can be used to "make it sound like celebrities, elected officials, or even your own friends and family are calling." The FCC has been working, along with state attorneys general, to shut down illegal AI-generated calls and texts.

Cyber crooks do their research to sound real

People unknowingly make the problem worse with social media posts by identifying family members — say your son Leo or your daughter Kate — in videos or photos.

The crooks, of course, need to know who cares about you enough to try to help you in an emergency. So, the scammers first must identify who they might target among your real friends and family before staging a crisis call to ask for money.

During the holidays, Bohl said, anything you do on social media to connect with families and friends can trigger some risk and make you more open to fraud.

His top two tips:

No. 1: Switch to automated voicemail.

No. 2: Create a family “safe word.”

Scam calls will sound even more real using replicated voices of those we know, experts say. So, we will want to be able to calmly figure out if we’re talking to a crook. You want a safe word or security question in place long before any of these calls start.

Questions can help, such as: What five tricks can the dog do in the morning? What was your favorite memory as a child? What was the worst golf score you ever posted? You want something that a scammer won’t be able to easily guess — or quickly look up online. (And if you don’t have a dog or play golf, well, you might have a good trick question there.)

“We can expect a significant uptick in AI-powered fraudulent activities by 2025,” said Katalin Parti, an associate professor of sociology and a cybercrime expert at Virginia Tech.

The combination of social media and generative AI will create more sophisticated and dangerous attacks, she said.

As part of the fraud, she said, scammers also can create robocalls to collect voice samples from potential victims. It can be best not to engage in these types of calls, even by responding with a simple “hello.”

Parti gives more tips: Don’t contact any telephone number received via pop-up, text or email. Do not answer cold calls, even if you see a local area code. If you do not recognize the caller but you decide to answer the call anyhow, let the caller talk first.

AI voice-cloning is a significant threat as part of financial scams targeting older adults, as well as for misinformation in political campaigns, according to Siwei Lyu, professor of computer science and engineering at the University at Buffalo and director of the UB Media Forensic Lab.

What’s troubling, he said, is that AI-generated voices can be extremely difficult to detect, especially when they are played over the phone and when the message can elicit emotional reactions such as when you think a close family member is hurt.

Take time to step back and double-check whether the call is real, Lyu said, and listen carefully for other clues that can reveal an AI-generated voice.

“Pay attention to abnormal characteristics, such as overly quiet background, lack of emotional tone in the voice or even the lack of breathing in between utterances,” he said.

New tools can make a scam phone call more convincing

But remember, new technology is evolving. Today, more types of phishing emails and texts look legitimate, thanks to AI.

The old saw, for example, that you just need to look for bad grammar or spelling mistakes to spot a fake email or text could prove useless one day, as AI tools assist foreign criminals in translating the phrases they're using to target U.S. businesses and consumers.

Among other things, the FBI warned that cyber crooks could:

  • Generate short audio clips containing a loved one's voice to impersonate a grandchild or other relative who was arrested, hurt in a car accident or facing some other crisis. When the voice sounds like someone you know, you might be more likely to panic and give in to a request for bail money or even a demand for a ransom. And you might be more willing to take swift action when a call from your "boss" demands that you buy gift cards for Best Buy to pay a particular invoice. Be skeptical.
  • Use AI-generated audio clips of individuals and impersonate them to gain access to bank accounts.
  • Use realistic videos in private communications to "prove" the online contact is a "real person."

Many times, we cannot even imagine how cyber criminals thousands of miles away could know how our voices sound. But much is out there already — more than even a simple voicemail message.


School events are streamed. Business conferences are available online. Sometimes, our jobs require that we post information online to market the brand.

And “there’s growing concern that bad guys can hack into voicemail systems or even phone companies to steal voicemail messages that might be left with a doctor’s office or financial advisor,” said Teresa Murray, who directs the Consumer Watchdog office for U.S. PIRG, a nonprofit advocacy group.

Such threats become more real, she said, in light of incidents such as the massive data breach suffered by National Public Data, which aggregates data to provide background checks. The breach was announced in August.

Yep, it’s downright sickening.

Murray said the proliferation of scams makes it essential to have conversations with our loved ones to make sure everyone understands that computers can impersonate voices of people we know.

Talk about how you cannot trust Caller ID to show that a legitimate government agency is calling you, too.

Don’t be afraid to just hang up

Michigan Attorney General Nessel’s alert about potential holiday scams using artificial intelligence recommended that:

  • Families should agree on a "code word" or key phrase that only your family would know to confirm an identity during a suspicious call.
  • Be ready to hang up. If something feels off, just hang up.
  • Call someone to verify their identity. Use a phone number that you know is real.
  • Do not hand over money easily. Scammers often demand that you pay them with cryptocurrency, gift cards or money transfers. But once you send that money, it's hard to trace or reverse.

Contact personal finance columnist Susan Tompor: stompor@freepress.com. Follow her on X (Twitter) @tompor.


