
Report finds that 'shadow AI' drives up data breach costs

Businesses are struggling to secure their AI systems against intrusion, inadvertently opening the door to larger data breaches, IBM's latest findings reveal.

"A study reveals that the usage of 'Shadow AI' raises the costs associated with data breaches"
"A study reveals that the usage of 'Shadow AI' raises the costs associated with data breaches"

'Report discovers that 'Shadow AI' elevates expenses associated with data breaches'

In a recent study, IBM highlights the growing threat of cyberattacks on artificial intelligence (AI) platforms, identifying compromised third-party components in the AI supply chain and poor AI security hygiene as the primary origin points for these costly breaches.

The report, based on 470 interviews with individuals at 600 organizations that suffered a data breach between March 2024 and February 2025, reveals that only 13% of organizations reported breaches involving AI tools; of those, a staggering 97% lacked proper AI access controls.

One of the report's key findings is the increased use of generative AI in data breaches. AI-generated phishing (37%) and deepfake impersonation attacks (35%) are becoming common methods among hackers. As previously reported, generative AI can cut the time needed to write a convincing phishing email from 16 hours to about five minutes.

The report also emphasizes the risks of rapid AI adoption without adequate security oversight. Unmonitored AI tools, often referred to as 'shadow AI', contributed to 20% of breaches and added an average of $670,000 to breach costs.

Hackers frequently gain access to AI platforms through compromised apps, APIs, or plug-ins, underscoring the importance of basic security protections. Yet 62% of organizations with AI governance policies still lack strong access controls on their AI tools. The risk compounds from there: organizations that fail to implement proper AI governance and controls also tend to lack zero-trust safeguards such as network segmentation, giving intruders more room to move once inside.
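The report does not prescribe specific fixes, but the pattern it describes, attackers walking in through plug-ins and APIs that were never gated, maps onto a familiar control: deny-by-default, scoped credentials for every integration that can reach an AI platform. The sketch below is a minimal illustration of that idea in Python, not drawn from the report; the key names and scopes are hypothetical.

    import hashlib
    import hmac

    # Hypothetical allow-list mapping API-key hashes to the scopes each
    # integration may use. In practice this belongs in a secrets manager
    # or identity provider, never in source code.
    AUTHORIZED_KEY_HASHES = {
        hashlib.sha256(b"example-plugin-key").hexdigest(): {"summarize"},
    }

    def is_authorized(api_key: str, requested_scope: str) -> bool:
        """Allow a call only if the key is known AND explicitly scoped
        for the requested action; everything else is denied by default."""
        key_hash = hashlib.sha256(api_key.encode()).hexdigest()
        for known_hash, scopes in AUTHORIZED_KEY_HASHES.items():
            # Constant-time comparison avoids timing side channels.
            if hmac.compare_digest(key_hash, known_hash):
                return requested_scope in scopes
        return False

    # A plug-in with a valid key can use only the scope it was granted...
    assert is_authorized("example-plugin-key", "summarize")
    # ...and anything else, or any unknown key, is refused.
    assert not is_authorized("example-plugin-key", "delete_records")
    assert not is_authorized("stolen-or-unknown-key", "summarize")

The deny-by-default shape is the point: the 97% of breached organizations without AI access controls were effectively running the inverse policy, where any caller that reached the endpoint was trusted.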

Corporate stakeholders are seeking a clearer understanding of the risk calculus of their technology stacks, with the question of whether they are a target chief among their concerns. The report serves as a warning about the consequences of not taking AI security seriously.

After compromising a company's AI platform, attackers often go on to breach other data stores (60% of cases) and occasionally disrupt operational infrastructure (31% of cases). The report indicates that security leaders are still working out how to oversee their companies' new AI platforms, and the findings underscore the need for improved AI governance and security practices.

  1. The report underscores the urgency of stronger cybersecurity measures and AI governance across data and cloud computing, as hackers routinely exploit weak access controls on AI tools, leading to phishing attacks and the compromise of other data stores.
  2. Given the growing threat of AI-generated phishing and deepfake impersonation attacks, organizations should prioritize their cybersecurity strategies, ensuring robust breach-prevention measures and strong AI security hygiene to safeguard their technology investments.
