As AI Progress Accelerates, Global Leaders Call for Ban on Superintelligent Systems
In a newly published open letter by the Future of Life Institute, over 800 signatories, including AI pioneers, are calling for a binding prohibition on the creation of superintelligent AI.

[Image source: Chetan Jha/MITSMR Middle East]
As artificial intelligence becomes a central force shaping economies, security strategies, and organizational futures, concerns about its unchecked advancement are no longer confined to research labs or regulatory circles. A growing international coalition spanning technologists, policymakers, and cultural influencers is now urging a global prohibition on the development of AI systems that could exceed human intelligence.
This appeal is not about halting AI innovation altogether. Rather, it reflects a strategic concern: that without robust safeguards, the pursuit of “superintelligence” could outpace humanity’s ability to control it, with profound consequences for governance, industry, and society.
In a newly published open letter by the Future of Life Institute (FLI), over 800 signatories, including AI pioneers Geoffrey Hinton and Yoshua Bengio and public figures such as Stephen Fry, Steve Bannon, and Meghan Markle, are calling for a binding prohibition on the creation of superintelligent AI.
“We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in,” the statement reads.
The signatories represent a rare convergence of viewpoints, with support from both Western and Chinese AI experts, such as Andrew Yao and Ya-Qin Zhang (former president of Baidu), as well as business leaders like Apple co-founder Steve Wozniak and Virgin Group’s Richard Branson.
Max Tegmark, president of FLI, emphasized the collective concern that ungoverned AI development poses a greater threat than competition among nations or corporations.
“It is our humanity that brings us all together here… More and more people are starting to think that the biggest threat isn’t the other company or even the other country but maybe the machines we are building,” Tegmark remarked.
The letter is backed by FLI polling data showing that only 5% of Americans favor the status quo of unregulated AI development, while nearly three-quarters of respondents expressed a strong preference for tighter control.
This is not the first time FLI has led such an initiative. In March 2023, months after the launch of ChatGPT, the group organized a similar campaign, endorsed by Elon Musk and others calling for a temporary pause in AI development. However, Tegmark clarified that the latest appeal has a narrower focus.
“You don’t need superintelligence for curing cancer, for self-driving cars, or to massively improve productivity and efficiency,” he said. “This is absolutely not calling for a pause on AI development.”
Still, the momentum among global tech giants to develop artificial general intelligence (AGI) remains strong, with players like OpenAI, Google, and Meta pushing forward, often with minimal regulatory oversight.
The appeal also underscores rising bipartisan concern in the US, where signatories include former national security adviser Susan Rice and Admiral Mike Mullen, who served as chairman of the US Joint Chiefs of Staff under Presidents George W. Bush and Obama.
“Loss of control is something that is viewed as a national security threat both by the West and in China. They will be against it for their own self-interests, so they don’t need to trust each other at all,” Tegmark noted.
Despite the growing urgency, regulatory responses remain fragmented. The EU’s AI Act, the most comprehensive legislation to date, is still being rolled out in phases. Meanwhile, in the US, state-level initiatives in California, Texas, and Utah are moving ahead, but a proposed federal moratorium on AI regulation was pulled from the national budget bill in July.
As the Middle East rapidly adopts AI to drive economic diversification, digital transformation, and innovation in sectors like energy, logistics, and healthcare, this global conversation takes on new relevance. Regional leaders must now weigh not only the competitive advantages of AI but also the strategic imperatives of governance, ethics, and long-term societal resilience.