With AI now being used to rapidly build applications across the enterprise, IBM has introduced tools to help organizations wrangle the AI systems and agents they may not even know they have.
IBM recently launched what it calls the “industry’s first software to unify agentic governance and security,” which integrates watsonx.governance and Guardium AI Security to help enterprises keep their AI systems — including agents — secured and responsible at scale, Heather Gentile, executive director of watsonx.governance, data and AI, told The New Stack.
Watsonx.governance is IBM’s end-to-end AI governance tool, and Guardium AI Security is its tool for securing AI models, data and usage.
“AI agents are set to revolutionize enterprise productivity, but the very benefits of AI agents can also present a challenge,” said Ritika Gunnar, IBM’s general manager for data and AI, in a statement. “When these autonomous systems aren’t properly governed or secured, they can carry steep consequences.”
The Shadow AI Challenge
Like its predecessor, shadow IT, shadow AI refers to pockets of ungoverned technology use inside an organization, in this case AI systems. It represents a growing challenge as AI tools become more accessible and employees can build autonomous systems with minimal technical expertise.
The Scale of the Problem
Recent research from Zoho’s ManageEngine shows that 60% of employees are using unapproved AI tools more than they were a year ago, with 93% admitting to inputting information into AI tools without approval. In addition, 32% of employees have entered confidential client data into AI tools without confirming company approval, while 37% have entered private, internal company data, the report said.
There is also a disconnect between IT leadership and employees, as 97% of IT decision-makers see significant risks in shadow AI, but 91% of employees surveyed said they perceive no risk, little risk or believe any risk is outweighed by the rewards.
Why Shadow AI Is Different
“Agents are the new hottest thing, and I think agents are more within employees’ reach than even generative AI [GenAI] was,” Gentile said. “They have the ability to build agents in just a few days through business applications like Salesforce or Workday.”
This accessibility sets shadow AI apart from traditional shadow IT. While shadow IT typically involves employees using unauthorized software or services, shadow AI enables them to create systems that can operate with minimal human oversight. Sales agents, customer service bots and data analysis tools can be deployed rapidly through familiar business applications, often without IT departments even knowing they exist, Gentile noted.
The autonomous nature of AI agents amplifies the risk. Unlike traditional software, which requires direct human input, AI agents can make decisions, process data and take actions independently. When these systems operate outside governance frameworks, they create blind spots with potentially far-reaching consequences.
Mike Gualtieri, an analyst at Forrester Research, said enterprises need to be concerned about shadow AI because “sometimes, unwittingly, an employee or team might use an application that has an AI model embedded in its functionality. IT will need AI sniffers to figure out where LLMs [large language models] are hiding (in the shadows).”
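What might such an “AI sniffer” look like in practice? Here is a minimal sketch, assuming a simple web-proxy log format and a hand-maintained list of public LLM API hosts; both are illustrative assumptions for this example, not part of any IBM or Forrester tooling. It flags which internal clients are quietly calling LLM endpoints:

```python
# Hypothetical "AI sniffer": flag outbound requests to known LLM API hosts
# in a web-proxy log. Host list and log format are illustrative assumptions.
import re
from collections import defaultdict

# Well-known public LLM endpoints (illustrative, not exhaustive).
LLM_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
    "api.perplexity.ai",
}

# Assumed log line format: "<timestamp> <client_ip> <method> <url>"
LOG_LINE = re.compile(r"^\S+ (?P<client>\S+) \S+ https?://(?P<host>[^/\s]+)")

def find_shadow_ai(log_lines):
    """Return {client_ip: set of LLM hosts contacted}."""
    hits = defaultdict(set)
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m and m.group("host") in LLM_HOSTS:
            hits[m.group("client")].add(m.group("host"))
    return hits

if __name__ == "__main__":
    sample = [
        "2025-08-01T09:14:02Z 10.0.4.17 POST https://api.openai.com/v1/chat/completions",
        "2025-08-01T09:14:05Z 10.0.4.17 GET https://example.com/index.html",
        "2025-08-01T09:15:11Z 10.0.9.42 POST https://api.anthropic.com/v1/messages",
    ]
    for client, hosts in find_shadow_ai(sample).items():
        print(f"{client} contacted LLM endpoints: {', '.join(sorted(hosts))}")
```

Real-world detection is harder than this sketch suggests: traffic to embedded models inside SaaS applications never crosses a proxy as a recognizable LLM endpoint, which is precisely the gap specialized tools aim to fill.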
The Business Impact
Organizations are facing pressure from multiple angles. ManageEngine’s research shows that 85% of IT decision-makers report employees are adopting AI tools faster than their IT teams can assess them, while 53% say employees’ use of personal devices for work-related AI tasks is creating security blind spots.
The consequences are real and measurable. IT decision-makers identify data leakage or exposure as the primary risk of shadow AI, cited by 63% of organizations. Additional concerns include intellectual property infringement, compliance violations and the potential for AI systems to make decisions that conflict with company policies or values.
“The biggest issue is privacy — sending company IP or personal data to an AI system that doesn’t have the appropriate protections or legal safeguards is going to cause problems,” David Mytton, CEO of developer security software provider Arcjet, told The New Stack. “Most people think this is about AI training on your private data — which is maybe part of it — but the real issue is following privacy frameworks. The right to delete your data, for example, might be impossible if you don’t know your employees are sending it to shadow AI tools.”
Meanwhile, Lawrence Hecht, The New Stack’s research director, noted: “For enterprises, the biggest issue is that business units (and not individuals) are starting to fund AI tools/services/software without the preapproval of IT. If past is prologue, in a year or two, IT will be forced to integrate the new tech into their existing stack, which can be a big headache.”
The Detection Challenge
Identifying shadow AI requires novel approaches. Traditional IT monitoring tools were not designed to detect AI agents that might be embedded in business applications or running in cloud environments. This has led to the development of specialized detection capabilities.
IBM has introduced new capabilities to Guardium AI Security through a collaboration with AllTrue.ai, including the ability to detect new AI use cases in cloud environments, code repositories and embedded systems — providing broad visibility and protection in an increasingly decentralized AI ecosystem, the company said. Once identified, IBM Guardium AI Security can automatically trigger appropriate governance workflows from watsonx.governance.
“We’re detecting shadow AI, similar to shadow IT, so if AI is not in registry or inventory, detecting the AI that’s running,” Gentile told The New Stack. “When shadow AI is detected, it can be brought into our governance technology and we can align it with the use case, so we can understand the purpose for why it’s running.”
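Conceptually, that means reconciling every AI system detected at runtime against a governance inventory and onboarding anything that is missing. Here is a minimal sketch of the idea in Python; the AIInventory class and its methods are assumptions made for this example, not watsonx.governance’s actual API:

```python
# Hypothetical workflow: reconcile detected AI systems against a
# governance inventory, registering anything found running "in the shadows."
from dataclasses import dataclass, field

@dataclass
class AIInventory:
    """Registry of approved AI use cases (illustrative sketch only)."""
    registered: dict[str, str] = field(default_factory=dict)  # system_id -> use case

    def reconcile(self, detected_ids: list[str]) -> list[str]:
        """Return detected systems missing from the registry (shadow AI)."""
        return [sid for sid in detected_ids if sid not in self.registered]

    def onboard(self, system_id: str, use_case: str) -> None:
        """Bring a shadow system under governance by recording its purpose."""
        self.registered[system_id] = use_case

inventory = AIInventory(registered={"sales-forecast-llm": "pipeline forecasting"})
shadow = inventory.reconcile(["sales-forecast-llm", "hr-resume-screener"])
for system_id in shadow:
    # In practice, triage would determine the actual business purpose.
    inventory.onboard(system_id, "pending review")
print("Shadow AI detected:", shadow)
```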
In addition, recent updates to IBM Guardium AI Security include automated red teaming to help enterprises detect and fix vulnerabilities and misconfigurations across AI use cases. To help mitigate risks such as code injection, sensitive data exposure and data leakage, the tool also lets users define custom security policies that analyze both input and output prompts, IBM said. These features are available now in IBM Guardium AI Security, and their integration with watsonx.governance will roll out through the remainder of the year, Gentile said.
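IBM has not detailed the policy syntax here, but the general pattern of screening input and output prompts can be sketched. The following is a minimal, hypothetical example using regex rules for sensitive-data patterns; it illustrates the concept only and is not Guardium AI Security’s actual interface:

```python
# Hypothetical prompt-screening policy, illustrating the concept of
# analyzing both input and output prompts for sensitive data. This is
# a generic sketch, not IBM Guardium AI Security's actual policy API.
import re

# Illustrative rules: pattern -> action ("block" stops the call,
# "redact" masks the match before it reaches the model or the user).
POLICY = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "block"),          # US Social Security number
    (re.compile(r"\b\d{13,16}\b"), "redact"),                 # likely payment card number
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "block"),   # embedded credentials
]

def apply_policy(text: str) -> tuple[str, bool]:
    """Return (possibly redacted text, allowed flag)."""
    for pattern, action in POLICY:
        if pattern.search(text):
            if action == "block":
                return text, False
            text = pattern.sub("[REDACTED]", text)
    return text, True

# Screen the input prompt before it leaves the enterprise boundary;
# the same check would run on model output before it reaches the user.
prompt = "Summarize this client record: SSN 123-45-6789"
screened, allowed = apply_policy(prompt)
if not allowed:
    print("Prompt blocked by security policy")
else:
    print("Forwarding:", screened)
```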
The integration also helps users validate compliance against 12 frameworks, including the EU AI Act and ISO 42001.
“The future of AI depends on how well we secure it today. Embedding security from the start is essential to protecting data, supporting compliance obligations, and building lasting trust,” said Suja Viswesan, vice president of security and runtime products at IBM, in a statement.
Ban the Use of Unauthorized AI?
The solution is not simply to ban unauthorized AI use. Employees are turning to these tools for legitimate productivity gains, with summarizing notes or calls (55%), brainstorming (55%) and analyzing data or reports (47%) being the top tasks completed with shadow AI, the ManageEngine study showed.
“Shadow AI represents both the greatest governance risk and the biggest strategic opportunity in the enterprise,” said Ramprakash Ramamoorthy, director of AI research at ManageEngine, in a statement. “Organizations that will thrive are those that address the security threats and reframe shadow AI as a strategic indicator of genuine business needs.”
The key is establishing comprehensive governance frameworks that can scale with AI adoption, both IBM and ManageEngine note. This should include clear policies and enforcement, automated detection and integration, employee education and technical innovation.
Turning Challenge Into Opportunity
Organizations need to shift from reactive detection to proactive management.
“IT leaders must shift from playing defense to proactively building transparent, collaborative and secure AI ecosystems that employees feel empowered to use,” Ramamoorthy said.
This approach involves:
- Integrating approved AI tools into standard workflows and business applications (recommended by 63% of IT decision-makers).
- Establishing vetted and approved lists of tools (recommended by 55%).
“I agree that the biggest issue is that companies need clear, enforced policies,” Hecht said. “Most (91%) have an AI governance policy, but less than half say it is consistently enforced.”
The Road Ahead
As AI continues to evolve, shadow AI will likely become more sophisticated and harder to detect. And the emergence of agentic AI represents the next frontier in this challenge.
“One of the biggest challenges for security teams is translating incidents and compliance violations into quantifiable business risk,” said Jennifer Glenn, research director for the IDC Security and Trust Group, in a statement. “The rapid adoption of AI and agentic AI amplifies this issue. Unifying AI governance with AI security gives organizations the necessary context to find and prioritize risks.”
Meanwhile, The New Stack’s Hecht noted that ManageEngine/Zoho markets to small- to medium-sized businesses “that are less likely than larger companies to have dedicated IT staffing and policies to make sure bring-your-own tech isn’t used.”
Moreover, “[ManageEngine] count using ChatGPT, Perplexity, etc., as unauthorized use. Imagine if using a Google search engine is unauthorized. That’s ridiculous,” he said. “Of the top risks identified by IT leaders, only data leakage and IP infringement are directly related to what IT leaders should care about.”
As IBM’s and ManageEngine’s efforts reveal, the goal is not to eliminate shadow AI entirely, but to transform it from a hidden liability into a visible, manageable, and strategic asset.