
Artificial Intelligence Advancements: Challenges for Corporations and Offered Remedies

In the rapidly evolving world of artificial intelligence (AI), the line between innovation and misuse is becoming increasingly blurred. From facilitating fraud to spreading misinformation, AI can be wielded by both businesses and criminals alike. This article explores the regulatory efforts being made by various countries to combat AI-generated fraud, particularly deepfakes.

China has taken a comprehensive approach to regulating AI-generated content. Starting in September 2025, all AI-generated or AI-altered media (images, video, audio, text, VR) must carry visible labels (such as watermarks or captions) and invisible digital signatures embedded in the content's metadata. Platforms are responsible for verifying these labels and must require users to declare AI-generated content when labels are absent. Removing or altering AI watermarks is banned. Content that is suspected to be AI-generated but is unmarked is labeled "suspected synthetic" for viewers. This regulatory framework positions the state as a gatekeeper supervising AI content integrity and includes efforts to centrally manage identity verification and user data.
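To make the labeling mechanism concrete, here is a minimal sketch of embedding a machine-readable AI-content label in an image's metadata. This is a hypothetical illustration using Pillow's PNG text chunks; the key names (`AIGC-Label`, `Generator`) are invented for the example and are not part of any official standard:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create a placeholder image standing in for AI-generated output.
img = Image.new("RGB", (64, 64), color="gray")

# Embed an invisible, machine-readable label in the PNG metadata.
meta = PngInfo()
meta.add_text("AIGC-Label", "ai-generated")   # hypothetical key name
meta.add_text("Generator", "example-model-v1")  # hypothetical provenance field
img.save("labeled.png", pnginfo=meta)

# A platform-side check can read the label back from the file.
reopened = Image.open("labeled.png")
print(reopened.text.get("AIGC-Label"))
```

Real-world schemes typically go further, using cryptographically signed provenance metadata rather than plain text fields, so that stripping or forging the label is detectable.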

The European Union (EU) emphasizes transparency, user consent, and robust governance measures to combat AI misuse. The EU also focuses on protecting individuals against non-consensual AI-generated media, especially in contexts like misinformation and privacy violations.

The United Kingdom (UK) is exploring legislation specifically targeting misuse of synthetic media for fraud and harm, such as non-consensual deepfake pornography and misinformation. Calls from UK-based legal professionals highlight the need for tailored laws distinct from existing data protection or defamation laws, as current legal instruments are inadequate for addressing AI-specific harms.

In the United States (US), known approaches involve a combination of state-level laws addressing deepfakes, disclosure requirements, and ongoing debates about federal legislation focusing on transparency and combating synthetic media fraud.

Key regulatory measures noted across these jurisdictions include mandatory labeling and watermarking of AI-generated content, platform responsibility to verify, detect, and flag synthetic content, prohibition on removal or alteration of AI-generated content markers, legislation targeting specific harms from deepfakes, centralized control or gating of AI-generated data and identity verification, and calls for forward-looking and cross-sector regulatory coordination.
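The platform-side obligations above (verify labels, fall back to user declarations, flag unmarked-but-suspected content) can be sketched as a simple decision flow. Everything here is a hypothetical illustration; the field names and the detector threshold are assumptions, not part of any regulation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MediaItem:
    label: Optional[str] = None    # visible AI label, None if absent
    user_declared_ai: bool = False # user declaration when no label is present
    detector_score: float = 0.0    # hypothetical synthetic-content detector output

def platform_flag(item: MediaItem, threshold: float = 0.8) -> str:
    """Sketch of a platform-side flagging flow for AI-generated media."""
    if item.label:
        return item.label                       # verified label shown as-is
    if item.user_declared_ai:
        return "ai-generated (user-declared)"   # fallback: user declaration
    if item.detector_score >= threshold:
        return "suspected synthetic"            # unmarked but likely AI-generated
    return "unlabeled"

print(platform_flag(MediaItem(detector_score=0.9)))  # flagged as suspected synthetic
```

In practice the detection step would involve forensic models and signature verification rather than a single score, but the precedence order (label, then declaration, then detection) mirrors the rules described above.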

With the number of deepfakes detected worldwide increasing tenfold from 2022 to 2023, regulation is needed to keep the digital playing field safe for everyone. According to Sumsub's Identity Fraud Report, AI-powered fraud was the fastest-growing type of attack in 2023. At the same time, the regulatory landscape reflects a balance between fostering AI innovation and protecting individuals and society from synthetic-content fraud and abuse.

While China exemplifies a centralized and mandatory approach, Western jurisdictions focus on legal adaptation and consent frameworks. The UK, for instance, adopted a "business-friendly" approach to AI regulation, releasing its AI White Paper in 2023 and presenting the AI (Regulation) Bill in response to calls for more assertive AI governance.

AI can also be used to verify identities with biometrics, reducing the chances of fraud. As AI technology continues to advance, it is crucial for regulations to evolve alongside it, ensuring a safe and secure digital landscape for all.

  1. The regulatory landscape in both China and the European Union (EU) emphasizes transparency and user consent, with China implementing a mandatory labeling and watermarking system for AI-generated content, while the EU aims to protect individuals from non-consensual AI-generated media.
  2. In the United States (US) and the United Kingdom (UK), regulations focus on combating specific harms from deepfakes, such as fraud and misinformation, and encourage cross-sector regulatory coordination as AI technology continues to advance.
