UAE Launches Pioneering AI Security Lab to Certify Trusted Systems

A first-of-its-kind facility is setting new standards for AI trust. Will this lab redefine how the world secures artificial intelligence?

A new AI security lab has launched in the UAE to assess and certify artificial intelligence systems. The facility, already operational, will test tens of thousands of AI agents each year against strict security and compliance standards. It was created through a partnership between the UAE Cyber Security Council, Open Innovation AI, Cisco, and Emircom. The lab evaluates AI models across six critical areas: model security, threat defence, data integrity, supply-chain security, agent autonomy, and regulatory compliance. Systems that meet its requirements will receive a national certification mark, allowing UAE-based developers to bring trusted products to market.

Assessments follow international benchmarks, including ISO 42001, MITRE ATLAS, NIST AI RMF, and OWASP frameworks. The infrastructure combines Cisco’s secure networking, NVIDIA GPU-powered computing, and Open Innovation AI’s software platform. Dr. Mohamed Al-Kuwaiti, Head of Cyber Security for the UAE Government, described the lab as a sovereign capability for building secure and trustworthy AI. It will serve federal and local government bodies, critical infrastructure operators, and private sector organisations seeking certification.

The lab is fully operational under the governance of the UAE Cyber Security Council. Its assessments are intended to ensure AI systems meet both national and global security standards, and developers and organisations can use the facility to validate their models before deployment.
