Advanced AI News
Voice/Audio Generation

How to protect yourself from AI voice cloning scams, deepfakes

By Advanced AI Editor · May 29, 2025


A terrifying scam is rapidly gaining momentum: fraudsters now use AI-generated voice clones to impersonate relatives and extort money from unsuspecting victims. With just a short audio clip pulled from your social media posts or past phone calls, scammers can mimic your voice convincingly.

In today’s digital world, deepfake technology and voice cloning are evolving rapidly, courtesy of artificial intelligence (AI). As the technology improves, its misuse can manipulate digital content, leading to fraud and to damage to personal or organisational reputations.

“Deepfake and voice cloning are interconnected yet distinct concepts,” says Mert Çıkan, Product Owner at SESTEK.

“Deepfake technology involves employing advanced AI algorithms to produce media—videos, audios, images, and text—that deceptively appear realistic, despite being fabricated or altered. Voice cloning, on the other hand, is a unique subset within deepfake technology, focusing on audio manipulation. With this technique audio content can be synthesized that sounds like a specific individual,” he says.

In recent years, there have been multiple cases of audio deepfakes and voice cloning incidents in Nigeria and across the world.

For instance, in April 2023, an audio clip surfaced online purportedly featuring Labour Party presidential candidate, Peter Obi, referring to the 2023 election as a “religious war” in an alleged conversation with the presiding Bishop of Winners Chapel International, David Oyedepo. Obi denied the authenticity of the “Yes Daddy” audio leak, labeling it as “fake” and “doctored,” and suggested it was an attempt to discredit him prior to Nigeria’s 2023 general election.


Another audio deepfake circulated online, allegedly capturing a conversation between former Nigerian president, Olusegun Obasanjo; Nigerian musician, Charly Boy and former Cross River State governor, Donald Duke. The audio clip suggested that Obasanjo was urging protests against the 2023 election results. However, a fact-check by TheCable concluded that the audio was doctored and did not meet the authenticity threshold when compared with verified samples of Obasanjo’s voice.​

On the global scene, a man in Los Angeles, United States was swindled out of $25,000 after fraudsters used AI to replicate his son’s voice, convincing him of a fabricated emergency. ​

A 2023 Guardian newspaper report revealed that AI scams now exploit voice cloning to defraud consumers. The report quoted the Southern African Fraud Prevention Service (SAFPS) as saying that impersonation attacks rose by 264 per cent in the first five months of 2023 compared with 2021, and noted that “cybercriminals are leveraging Artificial Intelligence (AI) through cloning to defraud unsuspecting consumers.”

The FactCheckHub spoke with two leading experts combating information disorder in Africa – Lee Nwiti, Chief Editor at Africa Check in South Africa, and Silas Jonathan, Digital Investigation Manager at the Digital Technology, Artificial Intelligence, and Information Disorder Analysis Centre (DAIDAC) in Nigeria.

“Scammers now use AI to clone voices with chilling accuracy — impersonating friends, family, or colleagues to trick victims into handing over money or sensitive information,” Nwiti warns.


But as concerns grow over the misuse of AI in everyday life, Jonathan emphasizes that awareness is the first and most crucial step in protecting oneself from AI-powered voice cloning scams.

“You can’t protect yourself from something you don’t even know exists,” he warns, highlighting the urgent need for public education on the realities of voice cloning technology. “The first thing people need to know is that there is a possibility that their voices can now be cloned,” he added.

Safety Tips:

The experts suggested the following ways to protect yourself and your loved ones from falling victim to this new form of scam and to audio deepfakes:

1. Know that voice cloning and deepfakes are real

AI voice cloning scams are no longer science fiction. With just a few voice samples, often pulled from social media videos, voice notes, or phone calls, scammers can create convincing voice replicas that sound just like someone you know. You can’t guard against what you don’t know exists: voice cloning is real, and so are audio deepfakes. Beware!

2. Watch out for emotional manipulation

Fraudsters typically create a sense of urgency — for example, claiming a loved one has been in an accident or kidnapped — and then demand immediate money transfers. Always pause before reacting.

3. Evaluate the audio or voice thoroughly

Before you interpret any alleged audio leak or phone call, ask yourself: “Now that I’m aware that voices can be cloned, is this true? Is the alleged person likely to say something like that?” Always question the authenticity of controversial or unexpected audio leaks and strange phone calls.

4. Verify before you act

Don’t trust the voice or audio clip alone. Call the person involved on their regular phone number, or contact a close friend or family member who can confirm the situation. If it doesn’t check out, it’s likely a scam.


Also, check whether credible media outlets have reported the incident, and search online for corroborating evidence that supports or refutes the audio.

5. Consult experts

Share such an audio clip or suspicious call recording with experts or fact-checking organisations like FactCheckHub, Africa Check, DAIDAC or Dubawa, among others. Fact-checkers have the tools and expertise to analyse such audio and can advise on whether the clip or recording is authentic. You may also submit it to The FactCheckHub team via our mobile app for verification.
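As a rough illustration of the kind of low-level analysis forensic tools start from, the sketch below computes spectral flatness, one simple acoustic feature among the many that audio examiners consider. It is purely illustrative and uses synthetic signals; a real authenticity determination requires trained detection models and expert review, and NumPy is assumed to be available.

```python
import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    """Geometric mean / arithmetic mean of the power spectrum.
    Values near 1.0 indicate noise-like audio; near 0.0, tonal audio."""
    power = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12  # floor avoids log(0)
    geometric_mean = np.exp(np.mean(np.log(power)))
    arithmetic_mean = np.mean(power)
    return float(geometric_mean / arithmetic_mean)

rng = np.random.default_rng(0)
sample_rate = 16_000
t = np.arange(sample_rate) / sample_rate   # one second of audio
tone = np.sin(2 * np.pi * 440 * t)         # pure 440 Hz tone (highly tonal)
noise = rng.standard_normal(sample_rate)   # white noise (noise-like)

print(f"tone flatness:  {spectral_flatness(tone):.4f}")
print(f"noise flatness: {spectral_flatness(noise):.4f}")
```

A single feature like this cannot label a clip real or fake; it only shows that audio has measurable structure that analysts and detection models can compare against verified samples of a person’s voice.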

6. Avoid hasty conclusions

Audio can be very tricky to deal with because it is not visual; there may be nothing tangible to examine. Be cautious with audio-only evidence, as the lack of visuals makes it easier to manipulate.

7. Set up family code words

Agree on secret phrases or code words with close family and friends that only you would know. If you ever receive a suspicious voice message or audio clip or phone call, ask for the code word before continuing the conversation.
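The spoken version of this tip needs no technology at all, but the underlying idea is a classic challenge-response check: both parties hold a shared secret, the callee issues a fresh random challenge, and only someone who knows the secret can produce the right answer. The sketch below illustrates that idea with Python’s standard `hmac` module; the code phrase and function names are hypothetical.

```python
import hashlib
import hmac
import secrets

SHARED_SECRET = b"our family code phrase"  # agreed in person, never shared online

def issue_challenge() -> str:
    """Callee sends a fresh random challenge so old responses can't be replayed."""
    return secrets.token_hex(8)

def respond(challenge: str, secret: bytes) -> str:
    """Caller proves knowledge of the secret without revealing it."""
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()

def verify(challenge: str, response: str, secret: bytes) -> bool:
    """Callee checks the response using a timing-safe comparison."""
    expected = respond(challenge, secret)
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
answer = respond(challenge, SHARED_SECRET)            # genuine caller
impostor = respond(challenge, b"wrong guess")         # caller without the secret
print(verify(challenge, answer, SHARED_SECRET))
print(verify(challenge, impostor, SHARED_SECRET))
```

In practice the family version is simply asking for the agreed phrase out loud; the point of the sketch is that the verification depends on a secret the scammer cannot clone from audio samples.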

8. Practice good digital hygiene

Limit the amount of personal voice content you share online. Be mindful of what you post publicly, and avoid oversharing details that could make cloned phone calls or audio clips more convincing.

9. Report scams to appropriate local authorities

Even if you didn’t lose money, report the incident, scam or audio leak to the appropriate local authorities. For Nigerians, report such scams to the Nigeria Police Force-National Cybercrime Centre (NPF-NCCC) via their official X account or the Police HQ cybercrime unit, or the National Information Technology Development Agency (NITDA) through their Computer Emergency Readiness and Response Team (ngCERT) portal.​

These voice cloning scams and audio deepfakes are sophisticated, but awareness and preparation can help you stay ahead. Talk to your friends and family about these tactics, and set up a plan. And remember: if a call feels like a scam, even in a familiar voice, it probably is. Hang up and verify.


