Advanced AI News
Voice/Audio Generation

How to protect yourself from AI voice cloning scams, deepfakes

By Advanced AI Editor · May 29, 2025 · 7 min read


A terrifying scam is rapidly gaining momentum: fraudsters now use AI-generated voice clones to impersonate relatives and extort money from unsuspecting victims. With just a short audio clip pulled from your social media posts or past phone calls, scammers can mimic your voice convincingly.

In today’s digital world, deepfake technology and voice cloning are rapidly evolving, courtesy of artificial intelligence (AI). As AI innovation accelerates, any misuse can result in the manipulation of digital content, leading to fraud and to damage to personal or organisational reputation.

“Deepfake and voice cloning are interconnected yet distinct concepts,” says Mert Çıkan, Product Owner at SESTEK.

“Deepfake technology involves employing advanced AI algorithms to produce media—videos, audios, images, and text—that deceptively appear realistic, despite being fabricated or altered. Voice cloning, on the other hand, is a unique subset within deepfake technology, focusing on audio manipulation. With this technique audio content can be synthesized that sounds like a specific individual,” he says.

In recent years, there have been multiple cases of audio deepfakes and voice cloning incidents in Nigeria and across the world.

For instance, in April 2023, an audio clip surfaced online purportedly featuring Labour Party presidential candidate, Peter Obi, referring to the 2023 election as a “religious war” in an alleged conversation with the presiding Bishop of Winners Chapel International, David Oyedepo. Obi denied the authenticity of the “Yes Daddy” audio leak, labeling it as “fake” and “doctored,” and suggested it was an attempt to discredit him prior to Nigeria’s 2023 general election.


Another audio deepfake circulated online, allegedly capturing a conversation between former Nigerian president, Olusegun Obasanjo; Nigerian musician, Charly Boy and former Cross River State governor, Donald Duke. The audio clip suggested that Obasanjo was urging protests against the 2023 election results. However, a fact-check by TheCable concluded that the audio was doctored and did not meet the authenticity threshold when compared with verified samples of Obasanjo’s voice.​

On the global scene, a man in Los Angeles, United States was swindled out of $25,000 after fraudsters used AI to replicate his son’s voice, convincing him of a fabricated emergency. ​

A 2023 Guardian newspaper report revealed that scammers now use AI voice cloning to defraud consumers. The report quoted the Southern African Fraud Prevention Service (SAFPS) as saying that impersonation attacks increased by 264 per cent in the first five months of 2023 compared to 2021, noting that “cybercriminals are leveraging Artificial Intelligence (AI) through cloning to defraud unsuspecting consumers.”

The FactCheckHub spoke with two leading experts combating information disorder in Africa – Lee Nwiti, Chief Editor at Africa Check in South Africa, and Silas Jonathan, Digital Investigation Manager at the Digital Technology, Artificial Intelligence, and Information Disorder Analysis Centre (DAIDAC) in Nigeria.

“Scammers now use AI to clone voices with chilling accuracy — impersonating friends, family, or colleagues to trick victims into handing over money or sensitive information,” Nwiti warns.


But as concerns grow over the misuse of AI in everyday life, Jonathan emphasizes that awareness is the first and most crucial step in protecting oneself from AI-powered voice cloning scams.

“You can’t protect yourself from something you don’t even know exists,” he warns, highlighting the urgent need for public education on the realities of voice cloning technology. “The first thing people need to know is that there is a possibility that their voices can now be cloned,” he added.

Safety Tips:

Here are the ways they suggested to protect yourself and your loved ones from falling victim to this new form of scam and audio deepfakes:

1. Know that voice cloning and deepfakes are real

AI voice cloning scams are no longer science fiction. With just a few voice samples – often pulled from social media videos, voice notes, or phone calls – scammers can create convincing voice replicas that sound just like someone you know. You can’t guard against what you don’t know exists. Voice cloning is real, and so are audio deepfakes – beware!

2. Watch out for emotional manipulation

Fraudsters typically create a sense of urgency — for example, claiming a loved one has been in an accident or kidnapped — and then demand immediate money transfers. Always pause before reacting.

3. Evaluate the audio or voice thoroughly

Before you interpret any alleged audio leak or phone call, ask yourself: “Now that I’m aware that voices can be cloned, is this true? Is this person likely to say something like that?” Always question the authenticity of controversial or unexpected audio leaks and strange phone calls.

4. Verify before you act

Don’t trust the voice or audio clip alone. Call the person involved on their regular phone number, or contact a close friend or family member of theirs to verify the situation. If it doesn’t check out, it’s likely a scam.


You can also check whether credible media outlets have reported on the incident, and search online for corroborating evidence that supports or refutes the audio.

5. Consult experts

It’s a good idea to share such an audio clip or phony call recording with experts or fact-checking organisations like FactCheckHub, Africa Check, DAIDAC or Dubawa, among others. Fact-checkers have the tools and expertise to analyse such audio and can provide guidance (or caution) on whether the clip or recording is authentic. You may also submit it to The FactCheckHub team via our mobile app for verification.

6. Avoid hasty conclusions

Audio can be tricky to deal with because it is not visual; often there is nothing tangible to examine. Be cautious with audio-only evidence – the lack of visuals makes it easier to manipulate.

7. Set up family code words

Agree on secret phrases or code words with close family and friends that only you would know. If you ever receive a suspicious voice message or audio clip or phone call, ask for the code word before continuing the conversation.
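The logic behind a family code word can be sketched in a few lines. This is a hypothetical illustration, not real software (the check happens in conversation, not in code): the code word, normalization rule, and example phrases below are all invented for the sketch. The point it demonstrates is why a pre-shared secret defeats a voice clone – the attacker can copy a voice, but not knowledge that was never shared online.

```python
import hashlib
import hmac

# Hypothetical code word, agreed in person and never posted anywhere online.
AGREED_CODE_WORD = "blue-pelican-42"

def normalize(phrase: str) -> str:
    """Lower-case and trim so minor differences in delivery don't matter."""
    return phrase.strip().lower()

def code_word_matches(spoken: str, secret: str = AGREED_CODE_WORD) -> bool:
    """Compare the spoken phrase against the shared secret.

    Hashing both sides and using a constant-time comparison mirrors how
    shared-secret checks are done in real authentication code.
    """
    a = hashlib.sha256(normalize(spoken).encode()).digest()
    b = hashlib.sha256(normalize(secret).encode()).digest()
    return hmac.compare_digest(a, b)

# A cloned voice can repeat anything it has heard, but it cannot
# produce a secret it has never heard.
print(code_word_matches("Blue-Pelican-42"))    # True: caller knows the secret
print(code_word_matches("please send money"))  # False: treat as suspicious
```

The same idea works entirely offline: ask for the code word, and if the caller hesitates or deflects, hang up and call back on a known number.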

8. Practice good digital hygiene

Limit the amount of personal voice content you share online. Be mindful of what you post publicly, and avoid oversharing details that could be used to make cloned phone calls or audio clips more convincing.

9. Report scams to appropriate local authorities

Even if you didn’t lose money, report the incident, scam or audio leak to the appropriate local authorities. For Nigerians, report such scams to the Nigeria Police Force-National Cybercrime Centre (NPF-NCCC) via their official X account or the Police HQ cybercrime unit, or the National Information Technology Development Agency (NITDA) through their Computer Emergency Readiness and Response Team (ngCERT) portal.​

These voice-cloning scams and audio deepfakes are sophisticated, but awareness and preparation can help you stay ahead. Talk to your friends and family about these tactics. Set up a plan. And remember: if a call feels like a scam, or a voice sounds just a little too familiar, it is probably phony. Hang up and verify.


