Why Should Leaders in the Middle East Care About Deepfakes?

As AI becomes widely accessible, the threat of deepfakes is only expected to increase.

    [Image source: Krishna Prasad/MITSMR Middle East]

    In the latest deepfake scam, a finance worker at a multinational firm in Hong Kong was tricked into paying out $25 million to fraudsters posing as the company’s chief financial officer. All it took was a phishing email and a video call.

    The criminals used artificial intelligence (AI) to manipulate the video with deepfake replications of other staff members and persuade the worker to make the transfer. Although the employee was initially suspicious, his doubts were dismissed after the video call: the people on screen looked and sounded like his colleagues. The fraud was discovered only when the employee checked with the company’s head office.

    This is not the only case of deepfake fraud. According to a survey by Regula, a staggering one out of every three businesses fell victim to such scams in 2023. 

    Deepfake fraud typically uses GenAI to produce videos and images impersonating others. “When using GenAI, there are deep learning technologies to create computer-generated voices, images, and videos,” says Vladislav Tushkanov, Group Manager of Machine Learning at cybersecurity firm Kaspersky. “So, it never happened in real life and has never been produced by the person.”

    What’s Real and What’s Not

    With advancing AI technology, distinguishing between genuine and fabricated media is becoming increasingly difficult. 

    A recent Kaspersky survey found that only 20% of Saudi Arabian employees could actually distinguish a deepfake from a real image, even though around 41% said they could tell them apart.

    But there are some telltale signs, according to Tushkanov. “There will be problems when a person moves their head from side to side, the skin tone can also look different, and there will generally be dim lighting in the video,” he says. “However, all these problems are usually hidden by those who create deepfakes. For example, they can say that they have a slow connection, that is why the camera is so blurry, and so forth.”

    He adds that the responsibility for detecting deepfakes shouldn’t rest only on employees, but also on an organization’s processes. “If you can transfer millions of dollars just because of a phone call, then this is not a problem with your ability to tell a real video from a synthetic video, but rather with the process,” he says. “When such amounts of money are concerned, there should be much stricter processes.”
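    To make Tushkanov’s point concrete, here is a minimal sketch, in Python, of what a stricter process could look like: transfers above a threshold require an independent second approver plus verification through a separate, pre-established channel. Every name in it (Transfer, APPROVAL_THRESHOLD, verified_out_of_band) is hypothetical and purely illustrative, not any particular organization’s system.

```python
# Minimal sketch of a dual-control rule for outgoing payments.
# All names here are hypothetical, for illustration only.

from dataclasses import dataclass

APPROVAL_THRESHOLD = 10_000  # amounts above this need extra controls


@dataclass
class Transfer:
    amount: float
    requester: str
    approver: str | None = None          # independent second sign-off
    verified_out_of_band: bool = False   # e.g., callback on a known number


def can_execute(t: Transfer) -> bool:
    """A video call or email alone is never sufficient for large sums."""
    if t.amount <= APPROVAL_THRESHOLD:
        return True
    # Large transfers: require a second approver who is not the
    # requester, plus confirmation through a separate channel.
    return (
        t.approver is not None
        and t.approver != t.requester
        and t.verified_out_of_band
    )


if __name__ == "__main__":
    request = Transfer(amount=25_000_000, requester="finance_clerk")
    print(can_execute(request))  # False: no approver, no out-of-band check
```

    Under a rule like this, a request resembling the Hong Kong transfer would have been blocked until someone confirmed it through a channel the fraudsters did not control, regardless of how convincing the video call was.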

    As AI becomes widely accessible, the threat of deepfakes is only expected to increase. A few years ago, before the AI boom, developing AI models was expensive and required technical background and skills. Now that the technology is cheaper and easier to use, anyone can gain access to such capabilities and potentially use them for malicious purposes.

    “Today, technology is becoming cheaper,” says Sari Hweitat, Product Manager at Cequens, a telecommunications service provider. “In some applications developed over the past two years, you don’t have to be knowledgeable in AI or have the technical skills to produce such a video, image, or audio recording. So, this is definitely increasing the threat.”

    Defending Against Deepfakes

    With the risks of deepfakes becoming clear, organizations are finding ways to protect themselves against such online security threats. 

    Cequens relies on advanced security protocols to protect the company and its clients from falling victim to online scams. “We work in the communications space, and deepfakes rely heavily on communications,” says Hweitat. “So, security measures and advanced security protocols are something we consider business as usual, not only to protect the company but also to protect our clients.”

    Security measures are necessary but not sufficient to protect against AI threats. Organizations also need to raise awareness and educate workers, from lower-level employees to top executives, on the rise and risks of deepfakes.

    “Raising awareness is crucial,” says Dr. Hao Li, Associate Professor at Mohamed bin Zayed University of Artificial Intelligence. “For people, it can become more and more difficult to detect deepfakes, especially since those who create them put effort into hiding that it is AI-manipulated content. You need to know that they can be faked. So, it’s important for people to be educated that not everything they see is real.”

    He adds that social media platforms also have a role in mitigating the rise of deepfakes. Since most people get their information from social media channels or messenger apps, tech companies should enforce rules requiring people to disclose whether content is AI-generated. “Regulations should be on platforms where people might be the most vulnerable,” says Li. “The danger is especially when things can go viral, and these are the places where people need to enforce mechanisms.”

    Many are now turning to AI itself to detect deepfakes. GenAI and large language models can analyze audio or video content and determine whether it is genuine or has been manipulated using AI.

    “You have to use AI to detect deepfakes,” says Li. “Because they will come to a point where people won’t be able to tell the difference, but the AI system will be able to tell you if something is a deepfake, and why it is so, and potentially even extract things that humans can’t.”
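    As a rough illustration of how such AI-assisted screening could be wired into a workflow, the sketch below uses the image-classification pipeline from the Hugging Face transformers library. The model name and the “fake” label are placeholders, not a real checkpoint: an actual deployment would substitute a trained deepfake-detection model and match its label scheme.

```python
# Illustrative sketch of AI-assisted deepfake screening.
# The model name below is a placeholder -- swap in a real
# deepfake-detection checkpoint and its actual labels.

from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="org/deepfake-detector",  # hypothetical checkpoint name
)


def screen_image(path: str, threshold: float = 0.8) -> bool:
    """Return True if the classifier flags the image as likely fake."""
    for prediction in detector(path):
        if prediction["label"].lower() == "fake" and prediction["score"] >= threshold:
            return True
    return False


if __name__ == "__main__":
    if screen_image("suspicious_frame.jpg"):
        print("Flag for human review before acting on this content.")
```

    The point of the threshold is Li’s: the system does not replace human judgment, but it can flag content for review at a scale and sensitivity that people alone cannot match.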
