AI adoption surges but security and governance lag behind

From customer service to threat detection, AI is reshaping operations—but weak policies leave gaps. Who’s leading the charge, and who’s falling behind?

Large language models (LLMs) are no longer just experimental tools for businesses. Companies now use them actively in daily workflows, from customer service to internal operations. Yet, as adoption grows, concerns about security and governance have come to the fore.

Most enterprises rely on AI models from just four providers: Microsoft, OpenAI, Google, and Anthropic. This consolidation reflects a push for standardisation across industries. Meanwhile, AI is being tested for security tasks such as threat detection, incident investigation, and automated response.

Formal governance frameworks are emerging, but progress remains uneven. Only a quarter of organisations have full AI security policies in place. Where governance exists, it improves coordination between executives and security teams. It also encourages staff training on AI tools and best practices.

Agentic AI, which can perform semi-autonomous tasks, is now part of operational planning. Companies are exploring its use in areas like incident response and access management. Clear policies help reduce the risks of unapproved tools and informal workflows.

The gap between prepared and unprepared organisations comes down to governance. Those with structured AI security frameworks align leadership with technical teams more effectively. While executive support for AI remains high, confidence in securing these systems lags behind.
