
Urgent Alert for iPhone and Android Users: Hang Up Immediately, Use a Secret Code

Smartphone users worldwide are being urged by security professionals, including the FBI, to hang up on suspicious calls and establish secret code words in response to persistent AI-powered cyber threats.

Mobile phone screen shows FBI emblem on display.

Update, March 22, 2025: This article, originally published March 20, has been updated with new information from Europol on how AI is reshaping the criminal landscape, details of ongoing AI cyber threats facing Gmail users, and the FBI's recommendation to use a secret code in response.

It seems like every day, we're being bombarded with warnings about AI-powered security threats. From code that can compromise your Chrome password manager credentials to critical AI attacks costing hackers as little as $5 to create, it's becoming a nightmare. But it's the deepfake attacks hitting smartphone users that are causing the most concern. Even though Google and others are doing their best to defend against these attacks, they're still so convincing that security experts—including the FBI—have issued warnings. Here's what you need to know and what you should do.

## Smartphone Attacks: What You Need to Know

When you think of deepfake attacks, you might think of face-swapping videos. But it's far more complex than that. If you want to test your ability to spot a deepfake face, there's a quick test you can take, but be warned—it's much harder than you think. Voice fakes, driven by AI, really grabbed public attention after I wrote a viral article in 2024 concerning a security expert who almost got fooled, with potentially costly consequences.

Cybersecurity expert Adrianus Warmenhoven, at NordVPN, has joined the voices warning iPhone and Android users about the threat. "Phone scammers increasingly use voice cloning tools for their fraudulent activities because this kind of software has become more affordable and effective over time," Warmenhoven shared. A common approach that's being seen in ongoing attacks is to use this deepfake audio to "approach family members of the individual they're impersonating," Warmenhoven explained, "and extort money by simulating an emergency."


Referencing an October 2024 report from Truecaller and The Harris Poll, America Under Attack: The Shifting Landscape of Spam and Scam Calls in America, Warmenhoven pointed to a shocking statistic: in the U.S. alone, more than 50 million people fell victim to phone scams in the previous 12 months, with average losses estimated at $452 per victim. "As deepfakes dramatically change the landscape of scam phone calls," Warmenhoven warned, "it's crucial to ensure that everyone in the family understands what voice cloning is, how it works, and how it could be used in scams like impersonating a family member to request money or personal information."

"Deepfakes will become unrecognizable," Siggi Stefnisson, cyber safety chief technical officer at trust-based security platform Gen, whose brands include Norton and Avast, warned. "AI will become sophisticated enough that even experts may not be able to tell what's authentic."

## Europol and FBI Warn of Changing AI Attack Threats

Europol has confirmed that organized crime is evolving, becoming more adaptable and dangerous than ever. The new European Serious Organised Crime Threat Assessment issued a stark warning that crime is being accelerated by AI and emerging technologies. "AI is fundamentally reshaping the organized crime landscape," it said. By rapidly exploiting the accessibility, adaptability, and sophistication of today's AI, threat actors have added a powerful attack tool to their arsenal. "These technologies automate and expand criminal operations, making them more scalable and harder to detect," the assessment warned.

AI is increasingly being used in online fraud schemes, Europol said, driven by social engineering attacks that can lead to access to vast amounts of data, including stolen personal information. "Nearly all forms of serious and organized crime have a digital footprint," the assessment stated, "whether as a tool, target, or facilitator."

"The value of AI is that it makes things faster," Evan Dornbush, a former NSA cybersecurity expert, shared. "Not more creative or inventive or persistent." Dornbush is right. Attackers can create sophisticated and believable messages in double-quick time, and most importantly, keep tweaking these automatically so every iteration is more believable than the last. "But speed is irrelevant if we cannot disrupt the attacker's profit potential," Dornbush concluded. "AI is decreasing the costs for criminals, and the community needs novel ways to either decrease their payouts, increase their operating budgets, or both."

"Breaking this new criminal code means dismantling the systems that allow these networks to thrive," Europol executive director Catherine De Bolle confirmed. "Targeting their finances, disrupting their supply chains, and staying ahead of their use of technology."

## FBI Deepfake Smartphone Audio Attack Mitigation Advice

As I reported on Dec. 7, 2024, the FBI has also been warning the public about such attacks, issuing public service announcement I-120324-PSA on this very subject. Both the FBI and Warmenhoven recommend the same mitigation, however brutal and startling it might sound: if you get a call claiming to be from a family member or close friend asking for money in this way, hang up immediately and verify the caller's identity through direct means. The FBI also recommends creating a secret word or phrase known only to you and your close contacts, and using it to verify any caller who claims to be someone in trouble, no matter how convincing they sound. And convincing they will be: deepfake calls are built from public audio clips, such as social media videos, fed through AI tooling to produce, in effect, that person saying anything that is typed in.

Warmenhoven also advised being cautious about the content of your social media postings, because this is "the largest publicly available resource of voice samples for cybercriminals." This means everyone should be wary of what they post, as it could be used to negatively impact their security through the rise of deepfakes, voice cloning, and other scams enabled by AI tools.

To mitigate the risk of these sophisticated and increasingly dangerous AI attacks against iPhone and Android users, the FBI suggests:

  1. Delete suspicious texts immediately.
  2. Verify information directly. If a message claims to be from a legitimate organization, verify any claims directly through official channels without clicking on links provided in the message.
  3. Avoid clicking dubious links. Never click on links from unexpected messages. Instead, manually type the URL of the official website you wish to visit.
  4. Regularly update your devices and apps. This will ensure you have the latest security enhancements.
  5. Install antivirus software. This can help identify phishing attempts.
  6. Use a password manager. A reliable password manager can safeguard sensitive information and prevent you from entering details on fraudulent sites.
  7. Report suspicious activities. Promptly report suspicious texts to your mobile carrier or the FBI’s Internet Crime Complaint Center to assist in tracking down scammers.

Following these steps can help protect you from sophisticated phishing attacks, regardless of whether they're AI-powered or not.



