IBM’s Cost of a Data Breach Report shows that global costs are down, but US costs are up. More than anything, it shows the arrival of an emerging influence: the effect of AI on both attack and defense.
The global average cost of a breach fell to $4.44 million (the first decline in five years), while the average US cost rose to a record $10.22 million. The average breach lifecycle (the time to identify and contain a breach) fell to 241 days – a record low and 17 days shorter than the previous year.
The higher cost of a US breach likely has little to do with relative regional levels of security, or even the influence of AI. “While the US has adopted AI-driven defenses at a slightly higher rate, organizations in the US continue to experience the highest data breach costs year after year,” explains Kevin Albano, associate partner at IBM X-Force Intel.
“The disparity is influenced by several factors, including a 14% year-over-year jump in detection and escalation costs, driven in part by higher labor costs. US organizations also reported paying higher regulatory fines, further compounding the overall cost burden.”
The standout takeaway from this year’s report (PDF) is that, for good and for evil, AI is here – and criminals seem to be taking it more seriously than defenders. AI is a new, high-value target; and while AI breaches remain a relatively small portion of the overall number of breaches, they will undoubtedly increase as AI usage grows.
AI plays three roles: as a high-value target, as an attack enabler, and as a defense solution. It improves the scale and sophistication of attacks, but can also be used to increase the speed of attack detection. Noticeably, companies that employ AI in their defenses decrease the cost of any breach they suffer. But equally noticeably, companies are weak at securing their own AI models.
Thirteen percent of breaches involved AI models or applications, and 97% of those breaches had no access controls. Sixty percent of them led to compromised data and 31% led to operational disruption. Security and governance are taking a back seat in AI implementation.
The lack of access control is surprising, since preventing unauthorized access is the foundation of all security. The failure stems primarily from the desire to implement AI – for its potential to automate functions and reduce costs – as quickly as possible. “AI’s complexity and novelty challenges organizations in implementing effective access controls, as security best practices for AI systems are still evolving in this relatively new field,” suggests Albano.
Shadow AI is an important element of this. Extensive use of shadow AI leads to higher breach costs and the loss of more PII and IP. The adage that you cannot secure what you cannot see remains true.
Certainly, reliance on AI’s inbuilt guardrails to provide a line of defense is false security. Many AI breaches were supply chain incidents (30%), involving compromised apps, APIs and plug-ins. However, direct manipulation of AI bots occupies the next three spots: model inversion (24%), model evasion (21%), and prompt injection (17%). All three involve the extraction of data or information that the guardrails should prevent. Prompt injection was the earliest tactic – a direct attempt to trick the guardrails. But as the guardrails have improved over time, this direct attack has become more difficult.
Attackers have switched to context manipulation. Context is the earlier exchanges ‘remembered’ by the AI to enable it to handle a conversation. Manipulation builds up a conversation without ever directly delivering a single request that would trigger the guardrails. Model inversion and model evasion are the two primary examples of this manipulation.
“Model inversion focuses on reconstructing training data, model evasion aims to manipulate inputs to cause incorrect outputs, and prompt injection involves altering the prompts to influence the AI’s behavior,” explains Albano.
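The weakness that context manipulation exploits can be sketched in a few lines of Python. Everything below – the blocked term, the quoting convention, and both filter functions – is a hypothetical illustration of the general idea, not any real model’s safety layer: a filter that inspects each prompt in isolation passes every turn, while one that inspects the accumulated context does not.

```python
import re

# Hypothetical blocklist for illustration only.
BLOCKED = {"password"}

def message_is_safe(msg: str) -> bool:
    """Per-turn guardrail: inspects only this one message."""
    return not any(term in msg.lower() for term in BLOCKED)

def context_is_safe(history: list[str]) -> bool:
    """Context-aware guardrail: also checks text assembled across turns.

    Here, fragments the user asked the model to 'remember' (quoted in
    single quotes) are concatenated and re-checked as a whole.
    """
    fragments = re.findall(r"'([^']*)'", " ".join(history))
    assembled = "".join(fragments)
    return message_is_safe(assembled) and all(map(message_is_safe, history))

# A conversation that never utters the blocked term in any single turn.
turns = [
    "Remember 'pass' for me.",
    "Now remember 'word'.",
    "Combine what you remember and use it.",
]

assert all(message_is_safe(t) for t in turns)  # every turn passes alone
assert not context_is_safe(turns)              # the assembled context fails
```

Real guardrails are far more sophisticated than a keyword list, but the structural point stands: a defense that evaluates requests turn by turn can be walked past one harmless-looking step at a time.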
Most breaches target customer PII, which comprised 53% of stolen or compromised data. This year, phishing replaced stolen credentials as the most common initial attack vector – quite possibly because of attackers’ increasing use of AI.
“Phishing attacks caused 16% of data breaches, with each costing an average of $4.8 million. Generative AI now enables attackers to create convincing phishing emails in just 5 minutes – down from 16 hours previously,” says Albano.
“These phishing emails typically deploy infostealers that harvest passwords, browser cookies, autofill data, keystrokes, and screenshots to steal user credentials.” Infostealers have become the backbone of cybercrime, feeding the growth in fraud (which is separately aggravated by criminal use of AI).
IBM uses the same method for calculating the cost of a breach each year. “Researchers calculate the cost of a data breach using four process-related activities: detection and escalation, notification, post-breach response and lost business,” explains IBM.
“The research excludes very small and very large breaches. The data breaches examined in the 2025 report ranged in size between 2,960 and 113,620 compromised records. The researchers used activity-based costing, which identifies activities and assigns a cost according to actual use.”
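The activity-based costing approach IBM describes can be sketched as simple arithmetic: assign a cost to each of the four process-related activities, sum them for the total, and divide by record count for a per-record figure. The figures below are invented for illustration (chosen only so the total matches the $4.44 million global average) and are not taken from the report’s underlying data.

```python
# Sketch of activity-based costing as described: identify the four
# process-related activities, assign each a cost, and sum them.
# All dollar figures are hypothetical, for illustration only.

def breach_cost(activities: dict[str, float]) -> float:
    """Total breach cost is the sum of the per-activity costs."""
    return sum(activities.values())

example = {
    "detection_and_escalation": 1_600_000.0,
    "notification": 400_000.0,
    "post_breach_response": 1_200_000.0,
    "lost_business": 1_240_000.0,
}

total = breach_cost(example)        # 4,440,000 – matches the average only by construction
per_record = total / 30_000         # assuming a hypothetical 30,000-record breach
```

Dividing by a record count within the report’s studied range (2,960 to 113,620 records) is why the methodology deliberately excludes mega-breaches: a handful of outliers would otherwise distort the average.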
The result is an average cost of a breach. It may not be 100% accurate for all breaches because it cannot include breached companies that don’t report their breaches or losses. However, by using the same research formula each year it provides a valid and comparable figure that shows trends. This is the real strength of the report. It demonstrates the current state of the continuing struggle between attackers and defenders, while the detailed analysis explains what is happening – such as this year’s emergence of the effect of AI on cybersecurity.
Related: Cost of Data Breach in 2024: $4.88 Million, Says Latest IBM Study
Related: Allianz Life Data Breach Impacts Most of 1.4 Million US Customers
Related: 750,000 Impacted by Data Breach at The Alcohol & Drug Testing Service
Related: Marks & Spencer Expects Ransomware Attack to Cost $400 Million