Close Menu
  • Home
  • AI Models
    • DeepSeek
    • xAI
    • OpenAI
    • Meta AI Llama
    • Google DeepMind
    • Amazon AWS AI
    • Microsoft AI
    • Anthropic (Claude)
    • NVIDIA AI
    • IBM WatsonX Granite 3.1
    • Adobe Sensi
    • Hugging Face
    • Alibaba Cloud (Qwen)
    • Baidu (ERNIE)
    • C3 AI
    • DataRobot
    • Mistral AI
    • Moonshot AI (Kimi)
    • Google Gemma
    • xAI
    • Stability AI
    • H20.ai
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Microsoft Research
    • Meta AI Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Matt Wolfe AI
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Manufacturing AI
    • Media & Entertainment
    • Transportation AI
    • Education AI
    • Retail AI
    • Agriculture AI
    • Energy AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
What's Hot

[2506.21997] Binned semiparametric Bayesian networks

IBM and SAP speed up cloud ERP migration with new hyperscaler option

MIT’s DNA sensor detects HPV or HIV for less than a dollar

Facebook X (Twitter) Instagram
Advanced AI News
  • Home
  • AI Models
    • Amazon (Titan)
    • Anthropic (Claude 3)
    • Cohere (Command R)
    • Google DeepMind (Gemini)
    • IBM (Watsonx)
    • Inflection AI (Pi)
    • Meta (LLaMA)
    • OpenAI (GPT-4 / GPT-4o)
    • Reka AI
    • xAI (Grok)
    • Adobe Sensi
    • Aleph Alpha
    • Alibaba Cloud (Qwen)
    • Apple Core ML
    • Baidu (ERNIE)
    • ByteDance Doubao
    • C3 AI
    • DataRobot
    • DeepSeek
  • AI Research & Breakthroughs
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Meta AI Research
    • Microsoft Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Meta AI Llama
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Education AI
    • Energy AI
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Media & Entertainment
    • Transportation AI
    • Manufacturing AI
    • Retail AI
    • Agriculture AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
Facebook X (Twitter) Instagram
Advanced AI News
Industry Applications

Three Ways AI Can Weaken Your Cybersecurity

Advanced AI EditorBy Advanced AI EditorMay 1, 2025No Comments6 Mins Read
Share Facebook Twitter Pinterest Copy Link Telegram LinkedIn Tumblr Email
Share
Facebook Twitter LinkedIn Pinterest Email


(Source: inray27/Shutterstock)

Even before generative AI arrived on the scene, companies struggled to adequately secure their data, applications, and networks. In the never-ending cat-and-mouse game between the good guys and the bad guys, the bad guys win their share of battles. However, the arrival of GenAI brings new cybersecurity threats, and adapting to them is the only hope for survival.

There’s a wide variety of ways that AI and machine learning interact with cybersecurity, some of them good and some of them bad. But in terms of what’s new to the game, there are three patterns that stand out and deserve particular attention, including slopsquatting, prompt injection, and data poisoning.

Slopsquatting

“Slopsquatting” is a fresh AI take on “typosquatting,” where ne’er-do-wells spread malware to unsuspecting Web travelers who happen to mistype a URL. With slopsquatting, the bad guys are spreading malware through software development libraries that have been hallucinated by GenAI.

‘Slopsquatting’ is a new way to compromise AI systems. (Source: flightofdeath/shutterstock)

We know that large language models (LLMs) are prone to hallucinations. The tendency to create things out of whole cloth is not so much a bug of LLMs, but a feature that’s intrinsic to the way LLMs are developed. Some of these confabulations are humorous, but others can be serious. Slopsquatting falls into the latter category.

Large companies have reportedly recommended Pythonic libraries that have been hallucinated by GenAI. In a recent story in The Register, Bar Lanyado, security researcher at Lasso Security, explained that Alibaba recommended users install a fake version of the legitimate library called “huggingface-cli.”

While it is still unclear whether the bad guys have weaponized slopsquatting yet, GenAI’s tendency to hallucinate software libraries is perfectly clear. Last month, researchers published a paper that concluded that GenAI recommends Python and JavaScript libraries that don’t exist about one-fifth of the time.

“Our findings reveal that that the average percentage of hallucinated packages is at least 5.2% for commercial models and 21.7% for open-source models, including a staggering 205,474 unique examples of hallucinated package names, further underscoring the severity and pervasiveness of this threat,” the researchers wrote in the paper, titled “We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs.”

Out of the 205,00+ instances of package hallucination, the names appeared to be inspired by real packages 38% of the time, were the results of typos 13% of the time, and were completely fabricated 51% of the time.

Prompt Injection

Just when you thought it was safe to venture onto the Web, a new threat emerged: prompt injection.

Like the SQL injection attacks that plagued early Web 2.0 warriors who didn’t adequately validate database input fields, prompt injections involve the surreptitious injection of a malicious prompt into a GenAI-enabled application to achieve some goal, ranging from information disclosure and code execution rights.

A list of AI security threats from OWASP. (Source: Ben Lorica)

Mitigating these sorts of attacks is difficult because of the nature of GenAI applications. Instead of inspecting code for malicious entities, organizations must investigate the entirety of a model, including all of its weights. That’s not feasible in most situations, forcing them to adopt other techniques, says data scientist Ben Lorica.

“A poisoned checkpoint or a hallucinated/compromised Python package named in an LLM‑generated requirements file can give an attacker code‑execution rights inside your pipeline,” Lorica writes in a recent installment of his Gradient Flow newsletter. “Standard security scanners can’t parse multi‑gigabyte weight files, so additional safeguards are essential: digitally sign model weights, maintain a ‘bill of materials’ for training data, and keep verifiable training logs.”

A twist on the prompt injection attack was recently described by researchers at HiddenLayer, who call their technique “policy puppetry.”

“By reformulating prompts to look like one of a few types of policy files, such as XML, INI, or JSON, an LLM can be tricked into subverting alignments or instructions,” the researchers write in a summary of their findings. “As a result, attackers can easily bypass system prompts and any safety alignments trained into the models.”

The company says its approach to spoofing policy prompts enables it to bypass model alignment and produce outputs that are in clear violation of AI safety policies, including CBRN (Chemical, Biological, Radiological, and Nuclear), mass violence, self-harm and system prompt leakage.

Data Poisoning

Data lies at the heart of machine learning and AI models. So if a malicious user can inject, delete, or change the data that an organization uses to train an ML or AI model, then he or she can potentially skew the learning process and force the ML or AI model to generate an adverse result.

Symptoms and remediations of data poisoning. (Source: CrowdStrike)

A form of adversarial AI attacks, data poisoning or data manipulation, poses a serious risk to organizations that rely on AI. According to the security firm CrowdStrike, data poisoning is a risk to healthcare, finance, automotive, and HR use cases, and can even potentially be used to create backdoors.

“Because most AI models are constantly evolving, it can be difficult to detect when the dataset has been compromised,” the company says in a 2024 blog post. “Adversaries often make subtle–but–potent changes to the data that can go undetected. This is especially true if the adversary is an insider and therefore has in-depth information about the organization’s security measures and tools as well as their processes.”

Data poisoning can be either targeted or non-targeted. In either case, there are telltale signs that security professionals can look for that indicate whether their data has been compromised.

AI Attacks as Social Engineering

These three AI attack vectors–slopsquatting, prompt injection, and data poisoning–aren’t the only ways that cybercriminals can attack organizations via AI. But they are three avenues that AI-using organizations should be aware of to thwart the potential compromise of their systems.

Unless organizations take pains to adapt to the new ways that hackers can compromise systems through AI, they run the risk of becoming a victim. Because LLMs behave probabilistically instead of deterministically, they are much more liable to social engineering types of attacks than traditional systems, Lorica says.

“The result is a dangerous security asymmetry: exploit techniques spread rapidly through open-source repositories and Discord channels, while effective mitigations demand architectural overhauls, sophisticated testing protocols, and comprehensive staff retraining,” Lorica writes. “The longer we treat LLMs as ‘just another API,’ the wider that gap becomes.”

This article first appeared on BigDATAwire.

Related

About the author: Alex Woodie

Alex Woodie has written about IT as a technology journalist for more than a decade. He brings extensive experience from the IBM midrange marketplace, including topics such as servers, ERP applications, programming, databases, security, high availability, storage, business intelligence, cloud, and mobile enablement. He resides in the San Diego area.



Source link

Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleWhat Investors Need to Know
Next Article Tappan Celebrates Living Artists, Curates Community, With First New York Space
Advanced AI Editor
  • Website

Related Posts

Tesla bulls breakdown Musk and Trump feud, and investors will like what they say

July 1, 2025

The Public Sector’s AI Moment Has Arrived

July 1, 2025

Clean energy stocks rise after tax on solar, wind removed from Trump bill

July 1, 2025
Leave A Reply Cancel Reply

Latest Posts

Tim Blum’s New Dealer Model

Hauser & Wirth Owners Relocate to Switzerland as UK Wealthy Migrate

The Cerne Abbas Giant’s Genitalia was Almost Covered By Trees

Eiffel Tower Closes Summit as Heat Wave Scorches Paris

Latest Posts

[2506.21997] Binned semiparametric Bayesian networks

July 2, 2025

IBM and SAP speed up cloud ERP migration with new hyperscaler option

July 2, 2025

MIT’s DNA sensor detects HPV or HIV for less than a dollar

July 2, 2025

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Recent Posts

  • [2506.21997] Binned semiparametric Bayesian networks
  • IBM and SAP speed up cloud ERP migration with new hyperscaler option
  • MIT’s DNA sensor detects HPV or HIV for less than a dollar
  • No Such Thing As Artificial Intelligence | Two Minute Papers #60
  • Cohere Raises $40 Million in Series A Financing to Make Natural Language Processing Safe and Accessible to Any Business

Recent Comments

No comments to show.

Welcome to Advanced AI News—your ultimate destination for the latest advancements, insights, and breakthroughs in artificial intelligence.

At Advanced AI News, we are passionate about keeping you informed on the cutting edge of AI technology, from groundbreaking research to emerging startups, expert insights, and real-world applications. Our mission is to deliver high-quality, up-to-date, and insightful content that empowers AI enthusiasts, professionals, and businesses to stay ahead in this fast-evolving field.

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

YouTube LinkedIn
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
© 2025 advancedainews. Designed by advancedainews.

Type above and press Enter to search. Press Esc to cancel.