Slopsquatting & Vibe Coding Can Increase Risk of AI-Powered Attacks

By Advanced AI Editor | April 17, 2025 | 3 Mins Read

[Image: zoomed-in monitor displaying programming code]

Security researchers and developers are raising alarms over “slopsquatting,” a new form of supply chain attack that leverages AI-generated misinformation, commonly known as hallucinations. As developers increasingly rely on AI coding tools such as GitHub Copilot, ChatGPT, and DeepSeek, attackers are exploiting AI’s tendency to invent software packages, tricking users into downloading malicious content.

What is slopsquatting?

The term slopsquatting was originally coined by Seth Larson, a developer with the Python Software Foundation, and later popularized by tech security researcher Andrew Nesbitt. It refers to cases where attackers register package names that don’t actually exist but that AI tools mistakenly suggest; once live, these fake packages can contain harmful code.

If a developer installs one of these without verifying it — simply trusting the AI — they may unknowingly introduce malicious code into their project, giving hackers backdoor access to sensitive environments.

Unlike typosquatting, where malicious actors count on human spelling mistakes, slopsquatting relies entirely on AI’s flaws and developers’ misplaced trust in automated suggestions.

AI-hallucinated software packages are on the rise

This issue is more than theoretical. A recent joint study by researchers at the University of Texas at San Antonio, Virginia Tech, and the University of Oklahoma analyzed more than 576,000 AI-generated code samples from 16 large language models (LLMs). They found that nearly 1 in 5 packages suggested by AI didn’t exist.

“The average percentage of hallucinated packages is at least 5.2% for commercial models and 21.7% for open-source models, including a staggering 205,474 unique examples of hallucinated package names, further underscoring the severity and pervasiveness of this threat,” the study revealed.

Even more concerning, these hallucinated names weren’t random. In multiple runs using the same prompts, 43% of hallucinated packages consistently reappeared, showing how predictable these hallucinations can be. As explained by the security firm Socket, this consistency gives attackers a roadmap — they can monitor AI behavior, identify repeat suggestions, and register those package names before anyone else does.
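
To make that recurrence finding concrete, here is a minimal Python sketch of the kind of measurement the researchers describe: collect the package names an assistant suggests across repeated runs of the same prompt, check each against PyPI, and see which nonexistent names keep coming back. The sample runs and the package names in them are made-up placeholders, and the existence check is a deliberately crude stand-in for the study’s actual pipeline.

```python
# Sketch: measure how often hallucinated (nonexistent) package names recur
# across repeated runs of the same prompt. The "runs" below are made-up
# sample data standing in for real model output; only the PyPI lookup is live.
import urllib.error
import urllib.request
from collections import Counter

def exists_on_pypi(name: str) -> bool:
    """Crude existence check against the public PyPI JSON API."""
    try:
        with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

# Hypothetical package names extracted from three runs of one coding prompt.
runs = [
    {"requests", "hypothetical-helper-lib", "numpy"},
    {"requests", "hypothetical-helper-lib", "pandas"},
    {"requests", "numpy", "pandas"},
]

counts = Counter(name for run in runs for name in run)
hallucinated = {name for name in counts if not exists_on_pypi(name)}
recurring = sorted(name for name in hallucinated if counts[name] > 1)

print(f"nonexistent packages: {sorted(hallucinated)}")
print(f"recurring across runs (prime squatting targets): {recurring}")
```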

The study also noted differences across models: CodeLlama 7B and 34B had the highest hallucination rates, both exceeding 30%, while GPT-4 Turbo had the lowest at 3.59%.

How vibe coding might increase this security risk

A growing trend called vibe coding, a term coined by AI researcher Andrej Karpathy, may worsen the issue. It refers to a workflow where developers describe what they want, and AI tools generate the code. This approach leans heavily on trust — developers often copy and paste AI output without double-checking everything.

In this environment, hallucinated packages become easy entry points for attackers, especially when developers skip manual review steps and rely solely on AI-generated suggestions.

How developers can protect themselves

To avoid falling victim to slopsquatting, experts recommend the following (a minimal verification sketch follows the list):

  • Manually verifying all package names before installation.
  • Using package security tools that scan dependencies for risks.
  • Checking for suspicious or brand-new libraries.
  • Avoiding copy-pasting install commands directly from AI suggestions.
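
Here is the verification sketch referenced above: a small Python script that asks the public PyPI JSON API whether each named package exists and how old it is before anything gets installed. The 90-day age threshold and the handling of packages with no released files are illustrative assumptions, not established best practice; adapt both to your own risk tolerance.

```python
# Pre-install guard: verify that a package exists on PyPI and is not brand
# new before installing it. A sketch; the age threshold is an assumption.
import json
import sys
import urllib.error
import urllib.request
from datetime import datetime, timezone

MIN_AGE_DAYS = 90  # arbitrary: treat younger packages as worth a manual review

def check_package(name: str) -> bool:
    """Return True only if the package exists and predates MIN_AGE_DAYS."""
    try:
        with urllib.request.urlopen(
            f"https://pypi.org/pypi/{name}/json", timeout=10
        ) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            print(f"{name}: NOT on PyPI -- possibly hallucinated")
            return False
        raise
    # Find the earliest upload time across all released files.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data.get("releases", {}).values()
        for f in files
    ]
    if not uploads:
        print(f"{name}: exists but has no released files -- treat with caution")
        return False
    age = (datetime.now(timezone.utc) - min(uploads)).days
    if age < MIN_AGE_DAYS:
        print(f"{name}: first upload only {age} days ago -- review before use")
        return False
    print(f"{name}: OK (first upload {age} days ago)")
    return True

if __name__ == "__main__":
    results = [check_package(n) for n in sys.argv[1:]]
    sys.exit(0 if results and all(results) else 1)
```

Run it with the package names taken from an AI suggestion before the corresponding pip install; a nonzero exit code means at least one name failed the check.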

Meanwhile, there is good news: some AI models are improving in self-policing. GPT-4 Turbo and DeepSeek, for instance, have shown they can detect and flag hallucinated packages in their own output with over 75% accuracy, according to early internal tests.
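
That self-review behavior is straightforward to prototype. Below is a minimal sketch, assuming an OpenAI-style chat API; the model name, prompt wording, and REAL/SUSPECT labels are illustrative assumptions, and a model’s verdict about its own output is a supplement to, not a replacement for, checking the registry itself.

```python
# Sketch: ask a model to review package names from its own output and flag
# likely-nonexistent ones. Assumes an OpenAI-style chat API; the model name
# and prompt are illustrative, and verdicts still need a registry check.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def flag_suspect_packages(names: list[str]) -> str:
    prompt = (
        "You previously suggested these Python packages. For each one, answer "
        "REAL or SUSPECT depending on whether you are confident it exists on "
        "PyPI:\n" + "\n".join(names)
    )
    resp = client.chat.completions.create(
        model="gpt-4-turbo",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(flag_suspect_packages(["requests", "hypothetical-helper-lib"]))
```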


