AI Phishing Is Rising in the UAE—How Can Professionals and Businesses Stay Safe?
Phishing campaigns now resemble legitimate digital communications, challenging even well-trained professionals to spot the difference.

[Image source: Chetan Jha/MITSMR Middle East]
Artificial intelligence (AI) has turned phishing into a highly personalized and sophisticated threat. Advanced language models allow attackers to generate emails, messages, and websites that convincingly imitate legitimate sources, eliminating the linguistic errors that once betrayed fraudulent communications. AI-powered bots on social media and messaging platforms can mimic real users, sustaining conversations over time to cultivate trust. These systems increasingly underpin romance or investment scams, leveraging AI-generated audio messages and deepfake videos to entice victims into fabricated opportunities.
According to newly released threat intelligence from Kaspersky, phishing attacks in the UAE rose by 21.2% in the second quarter of 2025 compared to the previous quarter. This increase is part of a wider global trend, in which threat actors harness artificial intelligence and evasive tactics to create highly believable, automated scams that circumvent traditional security systems.
Olga Altukhova, a security expert at Kaspersky, notes that the convergence of AI with evasive methods has transformed phishing into “a near-native mimic of legitimate communication,” pushing the boundaries of what users are able to detect. She adds that today’s attackers are no longer just after passwords. They are targeting biometric data as well as electronic and handwritten signatures, raising the risk of long-term identity compromise and reputational damage.
Cybercriminals are also increasingly leveraging trusted platforms to host and disseminate malicious content. For example, Telegram’s Telegraph feature, which was designed for sharing long-form posts, is being misused to host phishing material. Similarly, attackers are exploiting Google Translate’s page translation functionality to generate disguised URLs (e.g., https://site-to-translate-com.translate.goog/…) that slip past email security filters by appearing legitimate.
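As a minimal illustration of the translation-proxy trick, the sketch below flags URLs served through Google Translate's page-translation domain and recovers the site actually being visited. It assumes the simplified encoding visible in such links (dots in the original domain appear as hyphens, literal hyphens as `--`); real-world link analysis would need considerably more robustness.

```python
from urllib.parse import urlparse

def check_translate_proxy(url: str):
    """Flag URLs served via Google Translate's page-translation proxy.

    Hosts under translate.goog embed the original domain with its dots
    replaced by hyphens (and, in this simplified model, literal hyphens
    escaped as '--'). Such links make an arbitrary site appear to be
    hosted on a Google-owned domain.

    Returns the decoded original domain, or None if the URL does not use
    the proxy.
    """
    host = urlparse(url).hostname or ""
    suffix = ".translate.goog"
    if not host.endswith(suffix):
        return None  # ordinary URL, not translate-proxied
    encoded = host[: -len(suffix)]
    # Decode: protect '--' (a literal hyphen), turn remaining '-' into '.'.
    decoded = (
        encoded.replace("--", "\x00").replace("-", ".").replace("\x00", "-")
    )
    return decoded

print(check_translate_proxy("https://example-com.translate.goog/login"))
# → "example.com"
```

A security filter using this check would treat any hit as "the user is really browsing the decoded domain," and apply its usual reputation checks to that domain rather than to translate.goog.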
Even CAPTCHA, typically seen as a security mechanism, is now part of the phishing toolkit. By embedding CAPTCHA challenges on phishing websites, attackers can mimic the behavior of trusted sites and deflect detection from automated anti-phishing tools. This tactic adds a layer of perceived legitimacy, making it more difficult for both users and systems to recognize the threat.
Meanwhile, AI-driven deception techniques are growing more sophisticated. Deepfake audio and video are being used to impersonate bank employees, company executives, and public figures. Victims have reported receiving calls that sounded indistinguishable from their bank’s fraud department, asking for two-factor authentication (2FA) codes to “secure” their accounts. Once provided, these codes are used to gain unauthorized access.
Altukhova emphasizes that by co-opting platforms like Telegram and Google Translate—and weaponizing tools like CAPTCHA—attackers are outpacing traditional defenses. This shift means users must become more skeptical and proactive, as conventional cues of legitimacy can no longer be trusted.
How UAE Professionals and Organizations Can Respond
Given the pace of change in phishing tactics, organizations in the UAE must take a proactive, multi-layered approach to protection. Experts recommend:
- Investing in robust security solutions capable of detecting and blocking AI-powered phishing content before it reaches users.
- Maintaining a healthy skepticism around unsolicited links, messages, or calls, even when they appear genuine.
- Being cautious with digital identity assets, particularly camera access and signature uploads on unfamiliar or unverified websites.
- Watching for deepfake indicators, such as unnatural speech patterns or inconsistencies in facial movements, especially in high-stakes communication.
- Reducing exposure by limiting the online sharing of personal identifiers, ID documents, or confidential business materials unless absolutely necessary.
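The skepticism toward unsolicited links recommended above can be partly automated. The sketch below checks an HTML email body for a classic phishing cue: anchor text that displays one domain while the underlying `href` points to another. It is a simplified heuristic (real email pipelines handle redirects, URL shorteners, and far messier HTML); the sample email is invented for illustration.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs from an HTML email body."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:  # only collect text inside an anchor
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def _host_of(text: str) -> str:
    """Extract a hostname from URL-like anchor text, else return ''."""
    t = text.strip()
    if " " in t or "." not in t:
        return ""  # plain wording like "click here" is not URL-like
    return (urlparse(t if "//" in t else "//" + t).hostname or "").lower()

def mismatched_links(html: str):
    """Return links whose visible text names a different domain than the href."""
    auditor = LinkAuditor()
    auditor.feed(html)
    flagged = []
    for href, text in auditor.links:
        href_host = (urlparse(href).hostname or "").lower()
        text_host = _host_of(text)
        if text_host and href_host and text_host != href_host:
            flagged.append((text, href))
    return flagged

email = '<p>Verify now: <a href="https://evil.example.net/login">https://mybank.ae</a></p>'
print(mismatched_links(email))  # flags the mismatched link
```

A mismatch is not proof of phishing, but it is exactly the kind of conventional legitimacy cue that, per the experts quoted above, users can no longer verify by eye alone.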
A New Era of ‘Zero-Click’ Exploits
A particularly alarming development in this threat landscape is the emergence of “zero-click” phishing. In a campaign identified earlier this year as Operation ForumTroll, attackers targeted media outlets, universities, and government institutions in Russia by sending tailored invitations to the “Primakov Readings” forum. Victims who clicked the link were compromised immediately—no downloads, forms, or further action required.
The attack exploited a then-unknown vulnerability in the latest version of Google Chrome. To avoid detection, the malicious links remained live for only a short window, then redirected users to the legitimate event site once the exploit had run, minimizing traceability and complicating forensic analysis.