Recent IBM research finds an “AI oversight gap” among organizations that had experienced data breaches.
“Consider this: a staggering 97% of breached organizations that experienced an AI-related security incident say they lacked proper AI access controls,” the company said in promoting findings from its Cost of a Data Breach Report.
In addition, 63% of the surveyed organizations said they had no artificial intelligence (AI) governance policies in place to manage AI or keep workers from using “shadow AI,” IBM said. The findings were released in late July and flagged Monday (Aug. 18) in a report by CPO Magazine.
“This AI oversight gap is carrying heavy financial and operational costs,” the company’s announcement added. “The report shows that having a high level of shadow AI—where workers download or use unapproved internet-based AI tools—added an extra $670,000 to the global average breach cost.”
AI-related breaches also carried a ripple effect, leading to “broad data compromise and operational disruption,” which can keep organizations from processing sales orders, delivering customer service and managing supply chains.
The report also contains some positive news: average global data breach costs have declined for the first time in five years, from $4.88 million to $4.44 million, a 9% decrease.
“The catalyst? Faster breach containment driven by AI-powered defenses,” the company said. Organizations identified and contained breaches in a mean time of 241 days, a nine-year low.
Research by PYMNTS Intelligence has found that a growing number of companies are implementing AI-powered tools for cybersecurity protections.
The share of chief operating officers (COOs) who said their companies had implemented such measures stood at 55% in August of last year, up from 17% in May.
Those COOs, PYMNTS wrote earlier this year, “are moving to proactive, AI-driven frameworks — and away from reactive security approaches — because the new AI-based systems can identify fraudulent activities, detect anomalies and provide real-time threat assessments.”
More recently, PYMNTS examined the use of agentic AI in cybersecurity efforts, and the risks that come with it.
Those systems, by definition, operate independently, introducing new challenges for governance and compliance. Who’s at fault if an AI mistakenly flags a critical system and shuts it down? What happens if the AI fails to spot a breach?
“This isn’t a technical upgrade; it’s a governance revolution,” Kathryn McCall, chief legal and compliance officer at Trustly, said in a June interview with PYMNTS.