Businesses overlook AI security, providing opportunities for cyber attackers to exploit

Rapid Technology Adoption Outpaces Security Measures, and Governance Falls Short, IBM's Recent Report Finds

IBM's 2025 Cost of a Data Breach Report Highlights Growing AI Security Challenges

IBM's latest Cost of a Data Breach Report presents findings on AI security and governance, revealing a concerning lack of basic access controls for AI systems. According to Suja Viswesan, IBM's VP of Security and Runtime Products, this leaves highly sensitive data exposed and models vulnerable to manipulation.

The report highlights the danger of unsanctioned or shadow AI: the unofficial use of these tools within an organization without the knowledge or approval of IT or data governance teams. Approximately one-third of the organizations that experienced an AI security incident suffered operational disruption and unauthorized access to sensitive data.

The findings come as AI-related exposures currently make up a small proportion of total data breaches, but are anticipated to grow with increased AI adoption in enterprise systems. In fact, 87% of the organizations surveyed have no governance in place to mitigate AI risk.

The majority of organizations that reported an intrusion involving AI attributed the source to a third-party vendor providing software as a service (SaaS). Supply chain compromise was the most common cause of AI-related breaches, including compromised apps, APIs, and plug-ins.

The report also reveals that 17% of the organizations suffered reputational damage due to an AI-related attack, and 23% incurred financial loss as a result. Moreover, two-thirds of the breached organizations did not perform regular audits to evaluate risk.

In response to these challenges, the cybersecurity industry is shifting towards AI-first architectures in Next-Generation Security Operations Centers (SOCs). These new SOCs integrate machine learning, automation, and real-time analytics to manage alert noise and speed up incident response. AI's role is to empower human analysts with more context and precision rather than replace them.

Threat detection is evolving from static, rule-based systems to AI reasoning models that leverage behavioral analytics and anomaly detection, helping to reduce false positives and address sophisticated attacks more effectively. At the same time, security leaders expect daily AI-driven attacks in 2025, requiring continuous updates to security postures and workflows.
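To make the shift concrete, here is a minimal sketch (not from the IBM report; all data, thresholds, and function names are illustrative assumptions) contrasting a static rule with a simple behavioral baseline. The static rule applies one threshold to every user, so a heavy-but-normal user trips it daily; the behavioral version flags only days far outside that user's own history, cutting false positives.

```python
# Illustrative comparison: static threshold rule vs. per-user behavioral baseline.
from statistics import mean, stdev

STATIC_THRESHOLD = 100  # rule-based: flag any day with more than 100 logins

def rule_based_alerts(daily_logins):
    """Static rule: the same fixed threshold for every user."""
    return [day for day, count in enumerate(daily_logins)
            if count > STATIC_THRESHOLD]

def behavioral_alerts(daily_logins, z_cutoff=2.0):
    """Behavioral baseline: flag days far above this user's own average."""
    mu, sigma = mean(daily_logins), stdev(daily_logins)
    if sigma == 0:
        return []  # no variation in history, nothing stands out
    return [day for day, count in enumerate(daily_logins)
            if (count - mu) / sigma > z_cutoff]

# A heavy-but-normal user: routinely 110-130 logins/day, plus one real spike.
user_history = [115, 120, 110, 125, 130, 118, 400]

print(rule_based_alerts(user_history))   # flags all 7 days -> alert noise
print(behavioral_alerts(user_history))   # flags only day 6, the true spike
```

Production behavioral analytics models many more signals than login counts, but the design choice is the same: baselining each entity against itself rather than against a global rule.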

Despite broad awareness of AI risks and upcoming regulations, only 25% of organizations have fully implemented AI governance programs. Many companies struggle with unclear ownership of AI oversight, limited internal expertise, and resource constraints. A substantial confidence gap remains in oversight of third-party AI models, with only about two-thirds conducting formal AI risk assessments for external systems, increasing risk exposure.

Transparency, explainability, and bias mitigation are critical priorities for AI governance, especially in regulated sectors like finance and healthcare. Governments, including U.S. agencies, emphasize secure-by-design principles, AI assurance frameworks, and collaborative vulnerability sharing to bolster resilience and trustworthy AI deployment, particularly in critical infrastructure and national security contexts.

In conclusion, the core AI security and governance challenges highlighted in IBM's 2025 report are the need to manage increasingly AI-driven and sophisticated cyber threats, bridge governance and compliance gaps, implement robust third-party AI risk assessments, and ensure secure, explainable, and compliant AI deployment in fast-evolving regulatory environments. As AI becomes more deeply embedded across business operations, AI security must be treated as foundational, and the cost of inaction isn't just financial; it's the loss of trust, transparency, and control.

  1. The IBM Cost of a Data Breach Report reveals a concerning lack of basic access controls for AI systems, leaving highly sensitive data exposed and models vulnerable to manipulation.
  2. Approximately one-third of the organizations that experienced an AI security incident suffered operational disruption and unauthorized access to sensitive data.
  3. The report anticipates that AI-related exposures, currently a small proportion of total data breaches, will grow with increased AI adoption in enterprise systems, as 87% of the organizations surveyed have no governance in place to mitigate AI risk.
  4. The majority of organizations that reported an intrusion involving AI attributed the source to a third-party vendor providing software as a service (SaaS).
  5. In response to these challenges, the cybersecurity industry is shifting towards AI-first architectures in Next-Generation Security Operations Centers (SOCs), which integrate machine learning, automation, and real-time analytics to manage alert noise and speed up incident response.
