Is the Middle East ready for the AI Revolution? Opportunities, Roadblocks, and Forecasts

From AI use cases in government entities to ethical concerns and job displacement, here's a look at some of the key issues discussed at the NextTech Summit.

Reading Time: 5 Min 


The NextTech

More in this series

[Source photo: Venkat Reddy Marri/MITSMR Middle East]

    As artificial intelligence makes its way to business and society, a holistic approach is needed to reimagine processes, foster adoption, and develop the right capabilities. 

    At the NextTech Summit, renowned leaders and experts across various sectors shared key issues on use cases of AI in government entities, building a robust infrastructure, and navigating ethical concerns and the necessity of responsible practices. 

    Setting up a country for success 

    Dr. Ray Johnson, Chief Executive Officer of the Technology Innovation Institute, gave an overview of Abu Dhabi’s Advanced Technology Research Council (ATRC), which enables the emirate to harness knowledge and innovation to boost the nation’s economy. ATRC has identified security and defense, transportation, food and agriculture, space and aerospace, energy, sustainability, and health as integral sectors. “The goal here is to create a proof of concept demonstration for these sectors. We can look for commercialization partners once the proof of concept is accepted,” says Dr. Johnson.

    “The idea is to encourage other countries to bring their solutions here, and set up operations in the UAE, and form this ecosystem of innovation that’s so important,” says Dr. Johnson. 

    Similarly, H.E. Younus Al Nasser, Chief Executive of Dubai Data & Statistics Establishment, Digital Dubai, shares insights into the mandate of Digital Dubai and the next phase of its cybersecurity strategy.

    “The next phase of the cybersecurity strategy in Dubai is focusing on our cybersecurity regarding our society. It’s also looking to expand the cybersecurity scope not only of the government but also across the entire city. We are looking towards active cyber collaborations between organizations and centers looking into cybersecurity,” says H.E. Al Nasser. 

    “We are also looking to collaborate with them in finding new and advanced solutions that minimize any risks we have. Also, we’re looking at building and incubating innovative solutions within the cybersecurity sector of Dubai.” 

    Harnessing AI for government entities

    As organizations and entities leverage the potential of generative AI large language models, it’s worth looking into the significance of a robust infrastructure. 

    In August, Inception, a unit of G42, together with the Mohamed bin Zayed University of Artificial Intelligence and Silicon Valley-based Cerebras Systems, released Jais, a new Arabic large language model. Jais was trained on the Condor Galaxy supercomputer using 116 billion Arabic tokens and 279 billion English tokens. It can comprehend language, context, and cultural references by capturing the linguistic nuances of various Arabic dialects.

    Jais is trained on the Condor Galaxy, using 116 billion Arabic tokens and 279 billion English tokens.

    “Training models like this would be impossible without large, efficient computers,” says Maryam Alhosani, HPC and Quantum Computing Specialist, G42. “Picture the hardware as the engine in the car – the performance in this engine determines how quickly we can navigate the vast landscape of AI training.”

    Along with the models’ complexity and the enormity of the data, the hardware driving the process matters. “This potent combination of data, algorithms, and hardware efficiency propels us towards the forefront of AI innovation, enabling us to tackle problems and enable solutions beyond our imagination.”

    Talking about developing Jais, Tim Baldwin, Professor of Natural Language Processing at the Mohamed bin Zayed University of Artificial Intelligence, said the team faced issues with open access to high-quality Arabic data. As a result, they experimented with combining languages: “[We] mixed Arabic and English data to see if we can get any cross-lingual transfer to boost the performance of Arabic through language mixing.”
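The exact data recipe behind Jais isn’t spelled out here, but the language mixing Baldwin describes can be pictured as interleaving two corpora at a target ratio (roughly 116/(116+279) ≈ 29% Arabic, going by the token counts above). The function below is a hypothetical sketch for illustration, not the actual training pipeline:

```python
import random

def mix_corpora(arabic_docs, english_docs, arabic_ratio=0.29, seed=0):
    """Interleave two document lists so the training stream contains
    roughly `arabic_ratio` Arabic documents, letting the model see both
    languages side by side -- the cross-lingual transfer idea."""
    rng = random.Random(seed)  # seeded for a reproducible shuffle
    ar, en = list(arabic_docs), list(english_docs)
    mixed = []
    while ar or en:
        # Pick Arabic with the target probability, falling back to
        # whichever corpus still has documents left.
        take_arabic = bool(ar) and (not en or rng.random() < arabic_ratio)
        mixed.append(ar.pop() if take_arabic else en.pop())
    return mixed
```

In a real pretraining setup the mixing would happen at the token or batch level over streamed shards, but the ratio-driven interleaving is the same idea.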

    The potent combination of data, algorithms, and hardware efficiency propels us towards the forefront of AI innovation.

    Baldwin notes that Jais is developed for government use, as well as the financial, energy, climate, and healthcare sectors. “It’s not just linguistically attuned, but also culturally attuned to the region much better than other models are.”

    The decision to offer Jais as open source is a noteworthy feat, as it significantly contributes to the development of AI capabilities and the rise of Arabic-language content. Dr. Andrew Jackson, Chief Executive Officer of Inception, calls it a bold move that has received positive reactions globally and regionally. “What we wanted was validation, feedback, and support, and I think that’s what we achieved,” he says. With another version coming up, Dr. Jackson says they’re retraining the model on the Cerebras supercomputer and will start semi-regular releases that capture the feedback received. “It’s also opened up a conversation on what’s important in a large language model for the proponents across the UAE,” he adds.

    From another perspective, Paola Pisano, Professor of Innovation and Disruptive Technology at the University of Turin and former Italian Minister of Technological Innovation and Digitization, gave a deep insight into harnessing AI technology in Italy’s government sector. After the outbreak of the COVID-19 pandemic, Pisano recalls, “I felt a sense of urgency to find a better approach inside the government to detect a dangerous situation that could have been identified if the information, and the availability of information that we had was better analyzed.”

    Inspired by this, Pisano and her team proposed using AI and an early warning system to improve a country’s ability to detect and anticipate a crisis before it occurs. “Our system identified patterns and anomalies in the data that may indicate the likelihood of conflict by providing a probability.” She explains, “We use a machine learning transformer model and capture relevant temporal dependencies.” 

    From identifying anomalies to collecting data and creating a database to the training phase, the concept showed promising results. Eventually, Pisano hopes it will “support government and international organizations to anticipate conflict and instability before they occur. We hope to support them to act and define a contingency plan to deescalate the risk and change the direction of history.” Transparency is also a notable trait of the system, says Pisano. “It’s a tool to provide the probability of the conflict, but also the cause and variables of the conflict.” Besides government entities, Pisano aims to help private sectors understand problems and identify new industries to test the system.

    Navigating ethics in utilizing AI 

    The significance of ethical and responsible AI was a focal point of discussion. 

    As organizations grow keener to use LLMs in the market, Lake Dai, Adjunct Professor of Applied AI at Carnegie Mellon University, points out some challenges, such as high cost and alignment issues. “If you’re a public company using a third party’s API to generate results, you want to ensure that you have visibility and control on alignment issues, which are called AI hallucinations, wherein the AI would generate false information.”

    Data privacy and intellectual property are the biggest factors in using commonly used LLMs. “When you use an LLM, your information could be leaked in different processes. The process could be [embedded] in the model training. When you give your information to the training model, there could be a potential leak,” says Dai. To counter this issue, she suggests using open-source LLM solutions or putting a privacy scraping layer in place to replace sensitive information.
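The “privacy scraping layer” Dai suggests can be as simple as a redaction pass that runs before any text leaves the organization. The patterns below are toy assumptions for illustration (a real deployment would use a dedicated PII-detection library), but they show the shape of the idea:

```python
import re

# Toy patterns standing in for a real PII detector (an assumption for
# illustration; production systems use dedicated detection libraries).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def scrub(text):
    """Replace sensitive substrings with placeholders before the text
    is sent to a third-party LLM API."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

The same placeholders can be mapped back to the original values when the model’s response comes back, so the third party never sees the raw data.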

    When you give your information to the training model, there could be a potential leak.

    Upholding privacy was a factor the Jais team took into account when developing the Arabic LLM. Baldwin explains, “It’s open source, downloadable, deployable for institutions, and can be fine-tuned to your data without exposing that data to any other organizations – different to any other training models out there. That was a deliberate choice.”

    The team also implemented four safeguards, says Baldwin: identifying different dimensions of AI safety, developing prompt responses, filtering out toxic responses, and guiding the model to answer appropriately. For example, on issues such as self-harm or violence, rather than refusing outright, the system explains why the content is not appropriate. “There’s a lot of effort and testing that we’ve put on the AI safety side of things,” says Baldwin.
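The layering Baldwin describes can be pictured as a thin wrapper around the model: check the prompt, explain refusals, and filter the output. Everything below is a hypothetical toy, with keyword lists standing in for the trained safety classifiers a real system would use:

```python
# Hypothetical topic list; a real system uses trained safety classifiers.
SENSITIVE_TOPICS = {
    "self-harm": "it touches on self-harm; please reach out for support.",
    "violence": "instructions for violence could cause real-world harm.",
}
TOXIC_MARKERS = ("badword1", "badword2")  # placeholders, not real terms

def guard(prompt, generate):
    """Layered safeguard: explain *why* a request is refused instead of
    refusing outright, then filter toxic text out of the model's output."""
    lowered = prompt.lower()
    for topic, reason in SENSITIVE_TOPICS.items():
        if topic in lowered:
            return f"I can't help with that, because {reason}"
    response = generate(prompt)  # `generate` is the underlying model call
    if any(marker in response.lower() for marker in TOXIC_MARKERS):
        return "The generated answer was withheld by the safety filter."
    return response
```

The point of the design is the reasoned refusal: the user learns why a request was declined, which the source notes was a deliberate choice over a bare denial.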

    Elias Baltassis, Partner and Vice President, BCG, UAE, offered a brief overview of ethical challenges with artificial intelligence, such as biases and moral dilemmas. The rise of generative AI has brought a whole new set of issues. “What’s changing now is I believe we’ll reach an order of magnitude [more], because generative AI allows the production of huge quantities of extremely high-quality messages – meaning, all the deepfakes you can see.”

    He explains, “The targeting you can do with generative AI is much more powerful than ever.” Though regulation plays a key role, it’s only part of the answer. He also points out, “Excessive regulation is not the answer because it kills motivation.”

    Baltassis says the key is for organizations to adhere to responsible AI principles, supported by policies and tools integrated into companies and government entities.

    Tackling roadblocks and leveraging opportunities 

    Munther Dahleh, a Professor at the Massachusetts Institute of Technology, emphasized the urgency of access to extensive, accurate datasets. “Accuracy is extremely important. It’s not just about generating data. I can generate pure garbage data, but it’s very important to ensure accuracy in the data we create.”

    With the popularity of LLMs such as ChatGPT and Bard, Dahleh points out, “They’re believable, and because they’re believable, the privacy issues matter there. The more information they have about you, the more it allows that language model to pose as a trusted agent. It says things and interacts with you properly so that you may feel comfortable sharing private information, and that’s something we need to be careful about.”

    Excessive regulation is not the answer because it kills motivation.

    Dr. Alex Aliper, co-founder and President of Insilico Medicine, finds that time constraints are a common hurdle in utilizing LLMs in biotechnology. His generative AI-driven biotech company helps accelerate drug discovery and development with its evolving proprietary Pharma.AI platform across biology, chemistry, and clinical development. “You can’t verify an answer and output by the model quickly,” says Dr. Aliper. “You have to go through testing, validation, and experimentation that can take months or years. In contrast to ChatGPT, it’s a very time-consuming process to validate the output.” The company first developed its model in 2015, when generative AI was still gaining traction. “But it gave us enough time to validate, put out new case studies, and validate our engine thoroughly.”

    Validating data is also integral to ensuring the company adheres to safety and ethical regulations. For Dr. Aliper and his enterprise, such guidelines have shaped the team’s work. “We have very strict guardrails of validation, so whatever idea we come up with, we need to validate that it’s safe and efficacious, and only then can we offer it to the market.”

    The future of jobs

    While AI technology creates opportunities across sectors, job displacement and the rising skill gap in the AI field must be considered. 

    Talking about looming job displacement due to advanced technologies, Calum Chace, author and futurist, says, “Yes, technological unemployment will happen at some point. The good news is that this is still a long way into the future. But looking at today’s trends, there will be many jobs for the foreseeable future. Will we be able to predict which jobs will be automated first? We’ll all have to be better at automating jobs, frequently changing jobs, careers, and industries.”

    On the other hand, in the healthcare industry, Brandon Rowberry, Chief Executive Officer of Aster DM Healthcare – Digital Health, believes AI can have an exponential impact as an “additional” caregiver. The organization already benefits from AI technology in use cases such as radiology scans and training employees. “What’s exciting for us is the ability to spread the tech globally, especially in the developing world. It can be a sidearm for caregivers to access a wealth of knowledge and provide that care.” 

    He continues, “We have a shortage of doctors, nurses, and caregivers coming up, and as that spreads, we’re going to see the greatest increase in health availability and healthcare due to some of these technologies coming in.”

    Will it be humans vs. AI in the future? 

    As leaders and organizations embrace the rapid opportunities AI presents, Dai asserts it’s worth considering how the next generation should be taught. “For decades, we’ve tried to focus on knowledge transfer. Should we rethink how we prepare our kids for the next generation to co-work with AI?”

    She continues, “Many people ask me, is it AI vs. humans in the future? Or are we all going to be co-pilots? We must learn how to be co-pilots with AI in the future.” She elaborates, “I think AI is still like a baby. It’s still very young. All we have to do today is train the models. When you think about training an LLM, you’re training like a parent. In the future, it’s important to give the right data, knowledge, and guidance and become a parent. I’m here today to encourage everyone to think about how we co-pilot and ‘parent’ AI into a better world.”

    The summit, hosted by MIT Sloan Management Review Middle East on September 20 at The Ritz-Carlton, Dubai, had the Technology Innovation Institute as the presenting partner and Digital Dubai as the strategic government partner. G42 and Boston Consulting Group joined as the strategic sponsor and the gold sponsor, respectively.
