The Great Agentic AI Shakeout: Why 40% of Projects Fail. Can the Middle East Defy the Odds?
More than 40% of agentic AI initiatives are expected to be scrapped by 2027. Industry leaders discuss governance, data strategy, and scalable execution to secure real-world impact.
Agentic AI is generating a buzz that many in the market are eager to capitalize on. Across the Gulf, from retail and banking to aviation and logistics, enterprises are rushing to embed intelligent agents that can plan, act, and learn autonomously.
However, amid the excitement, a sobering statistic has surfaced. According to Gartner’s latest forecast, by 2027, over 40% of agentic AI projects will be abandoned before delivering measurable value. The reasons are not always technical. In many cases, the failure is one of strategy, governance, and discipline at scale.
In a region where AI projects are as much about national competitiveness as corporate innovation, that kind of attrition isn’t just expensive; it’s existential.
The Promise and the Peril
Gartner’s 2025 forecast projects that by 2028, a third of enterprise software will embed agentic capabilities, and 15% of enterprise decisions will be made by autonomous agents. Yet the same study found that 42% of enterprises describe their adoption as “conservative or stalled,” citing unclear ROI and operational complexity.
Jad Khalife, Director of Engineering at Dataiku, believes the reason is simple: “Every machine learning or GenAI use case is rooted in data, and the quality of that data and whoever controls it ultimately determines model performance.” He adds that data governance is not an afterthought but the foundation of AI maturity. “Organizations need to control who has access to data and ensure it meets the right standards. Without that, you can’t achieve the performance or reliability you expect from these systems.”
The Hidden Gaps
High ambition and low operational readiness are playing out across industries. According to Kevin Kiley, CEO of Aria, “Most organizations are making big investments, but many of these AI initiatives stall because they underestimate the complexity of operationalizing autonomy at scale.”
The first major gap is scope. Many projects, Kiley says, start with a broad mandate such as “build an AI assistant for operations” without a clearly bounded, high-value use case. “Without defined success criteria, teams chase capability demos instead of business outcomes.”
The second is governance. “Agentic AI changes the risk profile from information risk to action risk,” he says. “Many organizations lack the policy and monitoring frameworks to manage autonomous decisions safely and compliantly.”
The third gap is integration. “Agentic systems need deep access to enterprise context like data, APIs, workflows,” Kiley continues. “Too often, they’re built in isolation from existing systems, leading to impressive prototypes that can’t execute meaningful actions in production.”
The Misconception That Kills Projects
Even promising pilot projects can conceal underlying weaknesses. Kiley points out early warning signs, which include “many teams involved but no one taking responsibility for success or failure,” as well as pilots that “perform well in a controlled environment but fail to achieve real success in production, where integrations, permissions, and handling of edge cases become important.”
He warns leaders to pay attention to the right metrics. If success measures focus on activity rather than outcomes, such as counting tasks automated instead of assessing their impact on revenue, speed, cost, or satisfaction, Kiley says, “You’re already in trouble.” He also emphasizes the importance of involving security teams from the beginning. “If governance is treated as a post-launch step, you’re almost guaranteeing that compliance or privacy issues will derail the project later.”
Fawad Qureshi, Field CTO at Snowflake, points to a deeper misconception about readiness. “People think they can just slap agentic AI on top of poorly configured infrastructure and magic will happen,” he says. “If eight experienced data scientists can’t find an address column in a shipping company’s database, what chance does a hallucinating model have?”
He recalls telling clients, “If you have an old business process and you put an expensive new technology on it, what you get is an expensive old business process.”
AI doesn’t fix broken foundations; it amplifies their flaws.
Why Projects Stall
Gartner’s data underscores that lack of governance and integration are the top reasons agentic AI projects fail to scale. Khalife believes success begins with mindset. “The customers who succeed treat this as a business problem powered by technology, not a technology problem,” he says. “Without business endorsement and clear metrics, the technology won’t stick.”
Emphasizing that the difference between winning and lagging organizations comes down to how they treat data, Qureshi says, “Are you treating data as digital exhaust or as digital fuel? The organizations that win use data continuously to refine and improve their processes.”
He notes that Gulf-based digital-native companies, especially in the ride-hailing, logistics, and delivery sectors, already think this way. “Their only asset is data,” he says. “That’s why they can promise 15-minute delivery in Dubai traffic. You can’t do that without deep data understanding.”
Kiley warns that the real costs of agentic AI don’t appear until after deployment. “Agentic AI should be budgeted like a living system, not a one-time IT project.”
He recommends a three-phase budgeting model: around 25% of spending on launch activities such as model design and data integration; 35% on operationalization, including monitoring, fine-tuning, and human oversight; and 40% on scaling and evolution, from security hardening to continuous performance optimization. “This market evolves so fast that it would be negligent not to continually evaluate and route workloads to newer, more efficient models,” he notes.
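To make those proportions concrete, here is a minimal worked split, assuming a hypothetical $1 million program budget; the total and phase labels are illustrative, not figures from Kiley.

```python
# Illustrative only: splitting a hypothetical agentic AI budget across
# the three phases Kiley describes (25% launch, 35% operationalization,
# 40% scaling and evolution).
TOTAL_BUDGET = 1_000_000  # hypothetical program budget, in dollars

PHASE_SHARES = {
    "launch (model design, data integration)": 0.25,
    "operationalization (monitoring, fine-tuning, human oversight)": 0.35,
    "scaling and evolution (security hardening, optimization)": 0.40,
}

for phase, share in PHASE_SHARES.items():
    print(f"{phase}: ${TOTAL_BUDGET * share:,.0f}")
# launch: $250,000 | operationalization: $350,000 | scaling: $400,000
```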
Adding that metadata is the unsung hero of sustainability, Qureshi says, “Information about the package is as important as the package itself.” Without metadata, you can’t audit or explain decisions, which becomes critical when agents start taking autonomous actions.
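As a rough illustration of that point, the sketch below records basic metadata alongside each autonomous action so it can later be audited or explained. The field names and file format are assumptions made for illustration, not a reference to any particular vendor’s platform.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AgentDecisionRecord:
    """Hypothetical audit record: the 'information about the package'."""
    agent_id: str
    action: str
    inputs_summary: str      # what data the agent acted on
    data_sources: list[str]  # lineage: where that data came from
    model_version: str
    approved_by_human: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: AgentDecisionRecord, path: str = "agent_audit.jsonl") -> None:
    # Append-only JSON lines keep a trail that auditors can replay later.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(AgentDecisionRecord(
    agent_id="rebalancer-01",
    action="rebalance_portfolio",
    inputs_summary="Q3 holdings snapshot",
    data_sources=["warehouse.positions_v2"],
    model_version="2025-06",
    approved_by_human=True,
))
```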
The First Line of Defense
Khalife says that governance is both an enabler and a safeguard. “Agents aren’t deterministic,” he explains. “You have to control both what goes in and what comes out, or you risk data leakage or compliance failures.” Forward-thinking enterprises, he adds, are already utilizing machine learning to automatically detect and block sensitive data. “It’s not just about accuracy; it’s about safety and accountability.”
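A minimal sketch of that idea follows, using simple regular expressions to redact obviously sensitive values before text reaches, or leaves, an agent. The patterns are assumptions chosen for illustration; as Khalife notes, production systems typically rely on trained classifiers and policy engines rather than a short pattern list.

```python
import re

# Illustrative patterns only; real deployments use ML-based detection
# and governed access controls rather than a handful of regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(text: str) -> str:
    """Mask sensitive values on both the input and output sides of an agent."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

prompt = redact("Refund card 4111 1111 1111 1111 for a.customer@example.com")
print(prompt)  # Refund card [REDACTED CARD_NUMBER] for [REDACTED EMAIL]
```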
The Gulf’s innovation culture prizes speed, but experts warn that speed without control is a recipe for attrition. “The biggest trade-off in agentic systems is speed versus governance. You have to move fast because the tech evolves monthly, but you’re dealing with systems that can act unpredictably. Innovation must coexist with oversight,” Khalife says.
Qureshi refers to this approach as “controlled innovation.” In his words, “Every enterprise has three goals: make money, save money, and stay out of jail. Responsible innovation means achieving all three.”
Retraining for Higher-Value Work
When asked if agentic AI will cause job losses, Qureshi reframes the issue. “Attrition will happen, but it’s the attrition of roles, not people,” he says. The challenge, he adds, is retraining for higher-value work, not defending obsolete processes. He likens resistance to AI to defending handwriting in the age of the printing press: “AI is the modern printing press. We shouldn’t defend people who still want to copy books by hand.”
Across the Gulf, early adopters are demonstrating that disciplined governance yields significant benefits. Khalife cites a regional bank that tied its agentic rollout directly to portfolio rebalancing goals, unifying data, clarifying decision logic, and embedding governance from the outset, achieving an ROI in just 12 months.
Qureshi points to a major aviation group and several gig-economy firms using agentic workflows at scale. “They’re operating in a model of controlled innovation,” he says. “Not reckless experimentation, but sustainable, compliant progress.”
A Playbook for Gulf CIOs and Regulators
The roadmap for Gulf enterprises aiming to turn agentic AI into a sustainable competitive advantage starts with a clear focus. Projects that begin with well-defined, high-value use cases are far more likely to yield measurable business outcomes than those with broad or unclear objectives.
It is equally important to establish unified governance and ownership for AI initiatives. Successful organizations designate a single accountable leader responsible for both the success and risk management of these initiatives. Governance should not be viewed as a compliance afterthought; instead, it should be integrated into the system’s architecture—incorporated into the design, observability, and oversight processes.
Budgeting must also evolve beyond the launch phase. As Kiley says, the real costs of sustainability and scalability often emerge later, with as much as 40% of the total investment required for ongoing optimization, security hardening, and performance evolution. Centralizing both data and metadata remains foundational to transparency, interoperability, and trust.
Ultimately, success hinges on striking a balance: maintaining the velocity necessary for innovation while enforcing disciplined, secure checkpoints. It also demands investment in people. The next generation of Gulf enterprises will be those that reskill employees to collaborate with AI systems.
From Hype to Maturity
Gartner’s prediction that more than 40% of agentic AI projects will be scrapped by 2027 is a test of maturity. As Khalife puts it, “Many first attempts will fail, and that’s healthy. The market will mature around the right use cases, and success rates will rise.”
Kiley states that AI should be regarded as a “living system that learns and evolves.” Meanwhile, Qureshi emphasizes that there will be three types of companies: those that are data-driven, those that are in the process of becoming data-driven, and those that will go bankrupt.
For the Gulf’s AI leaders, the challenge is clear: govern smarter, scale responsibly, and invest for the long haul. The winners of the agentic era won’t be those who build the fastest systems; they’ll be the ones who build them to last.
