TechRepublic

Slopsquatting & Vibe Coding Can Increase Risk of AI-Powered Attacks

By Advanced AI Bot | April 17, 2025

Security researchers and developers are raising alarms over “slopsquatting,” a new form of supply chain attack that leverages AI-generated misinformation commonly known as hallucinations. As developers increasingly rely on coding tools like GitHub Copilot, ChatGPT, and DeepSeek, attackers are exploiting AI’s tendency to invent software packages, tricking users into downloading malicious content.

What is slopsquatting?

The term slopsquatting was originally coined by Seth Larson, a developer with the Python Software Foundation, and later popularized by tech security researcher Andrew Nesbitt. It refers to cases where attackers register package names that don't actually exist but are mistakenly suggested by AI tools; once those fake packages go live, they can carry harmful code.

If a developer installs one of these without verifying it — simply trusting the AI — they may unknowingly introduce malicious code into their project, giving hackers backdoor access to sensitive environments.

Unlike typosquatting, where malicious actors count on human spelling mistakes, slopsquatting relies entirely on AI's flaws and developers' misplaced trust in automated suggestions.

AI-hallucinated software packages are on the rise

This issue is more than theoretical. A recent joint study by researchers at the University of Texas at San Antonio, Virginia Tech, and the University of Oklahoma analyzed more than 576,000 AI-generated code samples from 16 large language models (LLMs). They found that nearly 1 in 5 packages suggested by AI didn’t exist.

“The average percentage of hallucinated packages is at least 5.2% for commercial models and 21.7% for open-source models, including a staggering 205,474 unique examples of hallucinated package names, further underscoring the severity and pervasiveness of this threat,” the study revealed.

Even more concerning, these hallucinated names weren’t random. In multiple runs using the same prompts, 43% of hallucinated packages consistently reappeared, showing how predictable these hallucinations can be. As explained by the security firm Socket, this consistency gives attackers a roadmap — they can monitor AI behavior, identify repeat suggestions, and register those package names before anyone else does.

The study also noted differences across models: CodeLlama 7B and 34B had the highest hallucination rates of over 30%; GPT-4 Turbo had the lowest rate at 3.59%.

How vibe coding might increase this security risk

A growing trend called vibe coding, a term coined by AI researcher Andrej Karpathy, may worsen the issue. It refers to a workflow where developers describe what they want, and AI tools generate the code. This approach leans heavily on trust — developers often copy and paste AI output without double-checking everything.

In this environment, hallucinated packages become easy entry points for attackers, especially when developers skip manual review steps and rely solely on AI-generated suggestions.
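
One lightweight safeguard is to treat every AI-generated snippet as untrusted input and enumerate its third-party imports before installing anything. The sketch below is a minimal illustration rather than a vetted tool: it assumes the snippet is valid Python, requires Python 3.10+ for sys.stdlib_module_names, and keep in mind that an import name does not always match the PyPI distribution name (for example, cv2 is shipped as opencv-python).

```python
# Minimal sketch: list the third-party modules an AI-generated Python
# snippet would import, so each name can be vetted before installing.
# Assumption: Python 3.10+ (for sys.stdlib_module_names). Import names
# do not always match PyPI distribution names (e.g., "cv2" is
# installed as "opencv-python"), so results still need a human look.
import ast
import sys

def third_party_imports(source: str) -> set[str]:
    """Collect top-level module names imported by a code snippet."""
    names: set[str] = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names.add(node.module.split(".")[0])
    # Drop modules that ship with the interpreter itself.
    return names - set(sys.stdlib_module_names)

if __name__ == "__main__":
    snippet = "import requests\nfrom flask import Flask\nimport os\n"
    print(third_party_imports(snippet))  # {'requests', 'flask'}
```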

How developers can protect themselves

To avoid falling victim to slopsquatting, experts recommend:

  • Manually verifying all package names before installation (see the existence check sketched after this list).
  • Using package security tools that scan dependencies for risks.
  • Checking for suspicious or brand-new libraries.
  • Avoiding copy-pasting install commands directly from AI suggestions.
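
As a concrete illustration of the first and third recommendations, the sketch below queries PyPI's public JSON API (https://pypi.org/pypi/<name>/json) to confirm that a suggested package exists and to flag very recent first uploads. It is a minimal example, not a policy: the 90-day threshold is an arbitrary assumption for this demo, and package age alone proves nothing about safety.

```python
# Minimal sketch: vet an AI-suggested package name against PyPI before
# installing it. A 404 from the JSON API means the package does not
# exist (a possible hallucination); a very recent first upload is worth
# a manual review. The 90-day threshold is an assumption for this demo.
import json
import urllib.error
import urllib.request
from datetime import datetime, timezone

PYPI_URL = "https://pypi.org/pypi/{name}/json"  # public PyPI JSON API

def vet_package(name: str, min_age_days: int = 90) -> str:
    try:
        with urllib.request.urlopen(PYPI_URL.format(name=name), timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return f"'{name}' not found on PyPI -- possible hallucination."
        raise
    # The earliest upload time across all releases approximates package age.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if not uploads:
        return f"'{name}' exists but has no uploaded files -- treat as suspicious."
    age_days = (datetime.now(timezone.utc) - min(uploads)).days
    if age_days < min_age_days:
        return f"'{name}' is only {age_days} days old -- review before installing."
    return f"'{name}' looks established ({age_days} days on PyPI)."

if __name__ == "__main__":
    print(vet_package("requests"))  # long-established, should pass the age check
```

None of this replaces a dedicated dependency scanner, but it catches the cheapest case: a name that simply does not exist.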

Meanwhile, there is good news: some AI models are improving in self-policing. GPT-4 Turbo and DeepSeek, for instance, have shown they can detect and flag hallucinated packages in their own output with over 75% accuracy, according to early internal tests.


