Close Menu
  • Home
  • AI Models
    • DeepSeek
    • xAI
    • OpenAI
    • Meta AI Llama
    • Google DeepMind
    • Amazon AWS AI
    • Microsoft AI
    • Anthropic (Claude)
    • NVIDIA AI
    • IBM WatsonX Granite 3.1
    • Adobe Sensi
    • Hugging Face
    • Alibaba Cloud (Qwen)
    • Baidu (ERNIE)
    • C3 AI
    • DataRobot
    • Mistral AI
    • Moonshot AI (Kimi)
    • Google Gemma
    • xAI
    • Stability AI
    • H20.ai
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Microsoft Research
    • Meta AI Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding & Startups
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • Expert Insights & Videos
    • Google DeepMind
    • Lex Fridman
    • Matt Wolfe AI
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • Matt Wolfe AI
    • The TechLead
    • Andrew Ng
    • OpenAI
  • Expert Blogs
    • François Chollet
    • Gary Marcus
    • IBM
    • Jack Clark
    • Jeremy Howard
    • Melanie Mitchell
    • Andrew Ng
    • Andrej Karpathy
    • Sebastian Ruder
    • Rachel Thomas
    • IBM
  • AI Policy & Ethics
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
    • EFF AI
    • European Commission AI
    • Partnership on AI
    • Stanford HAI Policy
    • Mozilla Foundation AI
    • Future of Life Institute
    • Center for AI Safety
    • World Economic Forum AI
  • AI Tools & Product Releases
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
    • Image Generation
    • Video Generation
    • Writing Tools
    • AI for Recruitment
    • Voice/Audio Generation
  • Industry Applications
    • Finance AI
    • Healthcare AI
    • Legal AI
    • Manufacturing AI
    • Media & Entertainment
    • Transportation AI
    • Education AI
    • Retail AI
    • Agriculture AI
    • Energy AI
  • AI Art & Entertainment
    • AI Art News Blog
    • Artvy Blog » AI Art Blog
    • Weird Wonderful AI Art Blog
    • The Chainsaw » AI Art
    • Artvy Blog » AI Art Blog
What's Hot

Creating a Thinking Multimodal Creative Engine_and_model_image

The Hybrid AI Law Firm – Artificial Lawyer

HumanAgencyBench: Scalable Evaluation of Human Agency Support in AI Assistants – Takara TLDR

Facebook X (Twitter) Instagram
Advanced AI News
  • Home
  • AI Models
    • OpenAI (GPT-4 / GPT-4o)
    • Anthropic (Claude 3)
    • Google DeepMind (Gemini)
    • Meta (LLaMA)
    • Cohere (Command R)
    • Amazon (Titan)
    • IBM (Watsonx)
    • Inflection AI (Pi)
  • AI Research
    • Allen Institue for AI
    • arXiv AI
    • Berkeley AI Research
    • CMU AI
    • Google Research
    • Meta AI Research
    • Microsoft Research
    • OpenAI Research
    • Stanford HAI
    • MIT CSAIL
    • Harvard AI
  • AI Funding
    • AI Funding Database
    • CBInsights AI
    • Crunchbase AI
    • Data Robot Blog
    • TechCrunch AI
    • VentureBeat AI
    • The Information AI
    • Sifted AI
    • WIRED AI
    • Fortune AI
    • PitchBook
    • TechRepublic
    • SiliconANGLE – Big Data
    • MIT News
    • Data Robot Blog
  • AI Experts
    • Google DeepMind
    • Lex Fridman
    • Meta AI Llama
    • Yannic Kilcher
    • Two Minute Papers
    • AI Explained
    • TheAIEdge
    • The TechLead
    • Matt Wolfe AI
    • Andrew Ng
    • OpenAI
    • Expert Blogs
      • François Chollet
      • Gary Marcus
      • IBM
      • Jack Clark
      • Jeremy Howard
      • Melanie Mitchell
      • Andrew Ng
      • Andrej Karpathy
      • Sebastian Ruder
      • Rachel Thomas
      • IBM
  • AI Tools
    • AI Assistants
    • AI for Recruitment
    • AI Search
    • Coding Assistants
    • Customer Service AI
  • AI Policy
    • ACLU AI
    • AI Now Institute
    • Center for AI Safety
  • Business AI
    • Advanced AI News Features
    • Finance AI
    • Healthcare AI
    • Education AI
    • Energy AI
    • Legal AI
LinkedIn Instagram YouTube Threads X (Twitter)
Advanced AI News
IBM

IBM Tackles Shadow AI: An Enterprise Blind Spot

By Advanced AI EditorJuly 18, 2025No Comments8 Mins Read
Share Facebook Twitter Pinterest Copy Link Telegram LinkedIn Tumblr Email
Share
Facebook Twitter LinkedIn Pinterest Email


With the explosion of the use of AI to quickly build applications in the enterprise, IBM has introduced tools to help organizations wrangle AI systems and agents they may be unaware of.

IBM recently launched what it calls the “industry’s first software to unify agentic governance and security,” which integrates watsonx.governance and Guardium AI Security to help enterprises keep their AI systems — including agents — secured and responsible at scale, Heather Gentile, executive director of watsonx.governance, data and AI, told The New Stack.

Watsonx.governance is IBM’s end-to-end AI governance tool, and Guardium AI Security is its tool for securing AI models, data and usage.

“AI agents are set to revolutionize enterprise productivity, but the very benefits of AI agents can also present a challenge,” said Ritika Gunnar, IBM’s general manager for data and AI, in a statement. “When these autonomous systems aren’t properly governed or secured, they can carry steep consequences.”

The Shadow AI Challenge

Like its predecessor, shadow IT, shadow AI includes pockets of ungoverned technology usage inside an organization – in this case, AI systems. This represents a growing challenge as AI tools become more accessible and employees can now build autonomous systems with minimal technical expertise.

The Scale of the Problem

Recent research from Zoho’s ManageEngine shows that 60% of employees are using unapproved AI tools more than they were a year ago, with 93% admitting to inputting information into AI tools without approval. In addition, 32% of employees have entered confidential client data into AI tools without confirming company approval, while 37% have entered private, internal company data, the report said.

There is also a disconnect between IT leadership and employees, as 97% of IT decision-makers see significant risks in shadow AI, but 91% of employees surveyed said they perceive no risk, little risk or believe any risk is outweighed by the rewards.

Why Shadow AI Is Different

“Agents are the new hottest thing, and I think agents are more within employees’ reach than even generative AI [GenAI] was,” Gentile said. “They have the ability to build agents in just a few days through business applications like Salesforce or Workday.”

This accessibility sets shadow AI apart from traditional shadow IT. While shadow IT typically involves employees using unauthorized software or services, shadow AI enables them to create systems that can operate with minimal human oversight. Sales agents, customer service bots and data analysis tools can be deployed rapidly through familiar business applications, often without IT departments even knowing they exist, Gentile noted.

The autonomous nature of AI agents amplifies the risk. Unlike traditional software that requires direct human input, AI agents can make decisions, process data and take actions independently. When these systems operate outside governance frameworks, they can create blind spots that can have far-reaching consequences.

Mike Gualtieri, an analyst at Forrester Research, said enterprises need to be concerned about shadow AI because “sometimes, unwittingly, an employee or team might use an application that has an AI model embedded in its functionality. IT will need AI sniffers to figure out where LLMs [large language models] are hiding (in the shadows).”

The Business Impact

Moreover, organizations are getting pressure from multiple angles. ManageEngine’s research shows that 85% of IT decision-makers report employees are adopting AI tools faster than their IT teams can assess them. Meanwhile, 53% say employees’ use of personal devices for work-related AI tasks is creating security blind spots.

The consequences are real and measurable. IT decision-makers identify data leakage or exposure as the primary risk of shadow AI, affecting 63% of organizations. Additional concerns include intellectual property infringement, compliance violations and the potential for AI systems to make decisions that conflict with company policies or values.

“The biggest issue is privacy — sending company IP or personal data to an AI system that doesn’t have the appropriate protections or legal safeguards is going to cause problems,” David Mytton, CEO of developer security software provider Arcjet, told The New Stack. “Most people think this is about AI training on your private data — which is maybe part of it — but the real issue is following privacy frameworks. The right to delete your data, for example, might be impossible if you don’t know your employees are sending it to shadow AI tools.”

Meanwhile, Lawrence Hecht, The New Stack’s research director, noted: “For enterprises, the biggest issue is that business units (and not individuals) are starting to fund AI tools/services/software without the preapproval of IT. If past is prologue, in a year or two, IT will be forced to integrate the new tech into their existing stack, which can be a big headache.”

The Detection Challenge

Identifying shadow AI requires novel approaches. Traditional IT monitoring tools were not designed to detect AI agents that might be embedded in business applications or running in cloud environments. This has led to the development of specialized detection capabilities.

IBM has introduced new capabilities to Guardium AI Security through a collaboration with AllTrue.ai, including the ability to detect new AI use cases in cloud environments, code repositories and embedded systems — providing broad visibility and protection in an increasingly decentralized AI ecosystem, the company said. Once identified, IBM Guardium AI Security can automatically trigger appropriate governance workflows from watsonx.governance.

“We’re detecting shadow AI, similar to shadow IT, so if AI is not in registry or inventory, detecting the AI that’s running,” Gentile told The New Stack. “When shadow AI is detected, it can be brought into our governance technology and we can align it with the use case, so we can understand the purpose for why it’s running.”

In addition, recent updates to IBM Guardium AI Security also include automated red teaming to help enterprises detect and fix vulnerabilities and misconfigurations across AI use cases. And to help mitigate risks such as code injection, sensitive data exposure and data leakage, the tool enables users to define custom security policies that analyze both input and output prompts, IBM said. These features are available now in IBM Guardium AI Security, and their integration with watsonx.governance will roll out throughout the remainder of this year, Gentile said.

The integration supports users’ processes to validate compliance standards against 12 different frameworks, including the EU AI Act and ISO 42001.

“The future of AI depends on how well we secure it today. Embedding security from the start is essential to protecting data, supporting compliance obligations, and building lasting trust,” said Suja Viswesan, vice president of security and runtime products at IBM, in a statement.

Ban the Use of Unauthorized AI?

The solution is not simply to ban unauthorized AI use. Employees are turning to these tools for legitimate productivity gains, with summarizing notes or calls (55%), brainstorming (55%) and analyzing data or reports (47%) being the top tasks completed with shadow AI, the ManageEngine study showed.

“Shadow AI represents both the greatest governance risk and the biggest strategic opportunity in the enterprise,” said Ramprakash Ramamoorthy, director of AI research at ManageEngine, in a statement. “Organizations that will thrive are those that address the security threats and reframe shadow AI as a strategic indicator of genuine business needs.”

The key is establishing comprehensive governance frameworks that can scale with AI adoption, both IBM and ManageEngine note. This should include clear policies and enforcement, automated detection and integration, employee education and technical innovation.

Turning Challenge Into Opportunity

Organizations need to shift from reactive detection to proactive management.

“IT leaders must shift from playing defense to proactively building transparent, collaborative and secure AI ecosystems that employees feel empowered to use,” Ramamoorthy said.

This approach involves:

Integrating approved AI tools into standard workflows and business applications (recommended by 63% of IT decision-makers).
Establishing vetted and approved tool lists (55% recommendation).

“I agree that the biggest issue is that companies need clear, enforced policies,” Hecht said. “Most (91%) have an AI governance policy, but less than half say it is consistently enforced.”

The Road Ahead

As AI continues to evolve, shadow AI will likely become more sophisticated and harder to detect. And the emergence of agentic AI represents the next frontier in this challenge.

“One of the biggest challenges for security teams is translating incidents and compliance violations into quantifiable business risk,” said Jennifer Glenn, research director for the IDC Security and Trust Group, in a statement. “The rapid adoption of AI and agentic AI amplifies this issue. Unifying AI governance with AI security gives organizations the necessary context to find and prioritize risks.”

Meanwhile, The New Stack’s Hecht noted that ManageEngine/Zoho markets to small- to medium-sized businesses “that are less likely than larger companies to have dedicated IT staffing and policies to make sure bring-your-own tech isn’t used.”

Moreover, “[ManageEngine] count using ChatGPT, Perplexity, etc., as unauthorized use. Imagine if using a Google search engine is unauthorized. That’s ridiculous,” he said. “Of the top risks identified by IT leaders, only data leakage and IP infringement are directly related to what IT leaders should care about.”

As IBM’s and ManageEngine’s efforts reveal, the goal is not to eliminate shadow AI entirely, but to transform it from a hidden liability into a visible, manageable, and strategic asset.

YOUTUBE.COM/THENEWSTACK

Tech moves fast, don’t miss an episode. Subscribe to our YouTube
channel to stream all our podcasts, interviews, demos, and more.

SUBSCRIBE

Group
Created with Sketch.

Darryl K. Taft covers DevOps, software development tools and developer-related issues from his office in the Baltimore area. He has more than 25 years of experience in the business and is always looking for the next scoop. He has worked…

Read more from Darryl K. Taft



Source link

Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleEvaluating generative AI models with Amazon Nova LLM-as-a-Judge on Amazon SageMaker AI
Next Article Adobe Firefly adds AI sound & video tools to boost creativity
Advanced AI Editor
  • Website

Related Posts

Amazon, IBM, and Dell helped build China’s surveillance state brick by brick, investigation finds

September 9, 2025

IBM vs. QCOM: Which Tech Stock Deserves a Spot in Your Portfolio Now? – September 9, 2025

September 9, 2025

IBM Declines 8.6% in 3 Months: Should You Rethink the Stock? – September 8, 2025

September 8, 2025

Comments are closed.

Latest Posts

Christie’s Will Auction The First Calculating Machine In History

The Art Market Isn’t Dying. The Way We Write About It Might Be.

Banksy Mural of Judge Beating Protestor Removed by Courts Service

Ralph Rugoff to Leave London’s Hayward Gallery After 20 Years

Latest Posts

Creating a Thinking Multimodal Creative Engine_and_model_image

September 11, 2025

The Hybrid AI Law Firm – Artificial Lawyer

September 11, 2025

HumanAgencyBench: Scalable Evaluation of Human Agency Support in AI Assistants – Takara TLDR

September 11, 2025

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Recent Posts

  • Creating a Thinking Multimodal Creative Engine_and_model_image
  • The Hybrid AI Law Firm – Artificial Lawyer
  • HumanAgencyBench: Scalable Evaluation of Human Agency Support in AI Assistants – Takara TLDR
  • Anthropic Claude AI Experiences Outage, Developers Reflect on AI Tool Dependency and API Stability_the_again_model
  • ‘Only around 28% companies are scaling up AI in a big way’, says Sandip Patel, managing director, IBM India – Industry News

Recent Comments

  1. hujefsip on 1-800-CHAT-GPT—12 Days of OpenAI: Day 10
  2. Angelia on 1-800-CHAT-GPT—12 Days of OpenAI: Day 10
  3. Victorcaw on Foundation AI: Cisco launches AI model for integration in security applications
  4. whimsyturtle6Nalay on Foundation AI: Cisco launches AI model for integration in security applications
  5. Rogerelose on 1-800-CHAT-GPT—12 Days of OpenAI: Day 10

Welcome to Advanced AI News—your ultimate destination for the latest advancements, insights, and breakthroughs in artificial intelligence.

At Advanced AI News, we are passionate about keeping you informed on the cutting edge of AI technology, from groundbreaking research to emerging startups, expert insights, and real-world applications. Our mission is to deliver high-quality, up-to-date, and insightful content that empowers AI enthusiasts, professionals, and businesses to stay ahead in this fast-evolving field.

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

LinkedIn Instagram YouTube Threads X (Twitter)
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
© 2025 advancedainews. Designed by advancedainews.

Type above and press Enter to search. Press Esc to cancel.