
The Real AI Bottleneck Isn’t Models—It’s Trust

Governments racing toward agentic AI are finding that the real challenge is data credibility, not computing power, talent, or ambition.



[Image source: Chetan Jha/MITSMR Middle East]

For years, AI strategy has focused on scale: larger models, faster chips, more talent. Governments, like enterprises, have invested heavily in cloud infrastructure and experimentation, believing that technological sophistication would automatically yield results.

Yet across the public sector, AI adoption continues to stall at the pilot stage. The reason is not a lack of algorithms. It is a lack of trust: specifically, trust in data.

    “Much of the global AI conversation is still confused,” says Amit Walia, CEO of Informatica. “What people see in the headlines is consumer AI—models trained on internet data, where individuals voluntarily give information away. Enterprise and government AI are fundamentally different. No government is going to say, ‘Here’s my data; put it into a public model and do whatever you want with it.’”

    That distinction explains the growing gap between what AI can do and what governments are willing to let it do.

    Why Consumer AI Leaps Ahead and Government AI Lags

Consumer AI works well with vast amounts of informal, unstructured data, where permissions are flexible and errors are tolerated. In enterprise and government environments, the opposite is true: privacy, sovereignty, accuracy, and accountability are non-negotiable.

    “That’s why AI adoption in the consumer world is far ahead of enterprise and government,” Walia explains. “Once you move AI inside institutions, everything changes: governance, quality, lineage, control. Without those, AI simply cannot scale responsibly.”

    This is why, despite impressive demonstrations, most government AI projects never move into production. Proofs of concept generate excitement, but scaling up is much harder because it requires integrating fragmented data from different ministries, agencies, and legacy systems.

    In government, data is not merely a technical issue; it is also an institutional one.

    The Moment AI Hits Reality

    The limits of low-trust data become obvious the moment AI systems are expected to operate with real-world consequences.

    “If you deploy an AI agent in customer service, and it doesn’t understand who you are, your history, your eligibility, your status, you’re going to be unhappy,” Walia says. “That same principle applies to governments, only the stakes are much higher.”

Agentic AI (systems capable of acting autonomously across workflows) raises those stakes exponentially. An AI agent approving permits, allocating benefits, or flagging compliance risks cannot rely on partial or contradictory data. Every inconsistency becomes a potential policy failure.

    “Bad data in will always produce bad outcomes,” Walia notes. “Enterprises and governments realized very quickly after experimenting that without fixing the data foundation first, the models deliver very little value.”

    The result is a paradox: as AI gets more powerful, it reveals more weaknesses in the data it relies on.

    Data Trust as the True Limiting Factor

    Trusted data is often misunderstood as a hygiene issue—something to clean up after innovation begins. In reality, it is the prerequisite for innovation at scale.

    At a sovereign level, trust means something very specific: the ability to make consequential decisions with confidence.

    “Trusted data means I can make a real decision based on what I’m seeing,” Walia explains. “Can I send you a tax bill? Can I issue a refund? Can I let you cross a border? Can I personalize a service without getting it wrong?”

    The challenge is structural. Citizen data is inherently fragmented, spread across hundreds of systems, recorded in different formats, and updated at different times. Without standardization, lineage, and governance, AI systems are forced to infer certainty where none exists.

    That is not intelligence. It is a risk.

    Qatar’s Strategic Reversal: Start With Authority, Not Algorithms

    Qatar’s approach to AI reflects a growing understanding that real intelligence begins with authority.

Rather than rushing toward autonomous systems, the country has prioritized building a common, authoritative data foundation across government: one designed to serve as a shared source of truth for both humans and machines.

    “What stood out immediately,” Walia says, “was the vision to create a common data foundation for all government entities—not one agency at a time, but a platform everyone could build on.”

    This is a structural departure from how most governments modernize. Instead of centralizing only policies or standards, Qatar is bringing both governance and infrastructure together, enabling a unified view of citizens, assets, and services.

    “That combination is rare,” Walia notes. “Many countries govern data centrally, but leave infrastructure fragmented. Qatar is doing both, and that changes what’s possible.”

The result is not just efficiency but interoperability by design. Ministries don't have to reconcile conflicting data after the fact; they work from the same trusted information from the start.

    From Governance as Constraint to Governance as Enabler

    One of the most persistent myths in AI adoption is that governance slows down innovation. In reality, it’s poor governance that causes delays.

    “Governance has terrible branding,” Walia admits. “It sounds like control, like ‘big brother.’ Nobody wants that.”

The solution, he argues, is proportionality. Governance should scale with risk and maturity: lightweight during experimentation, rigorous in production.

    “In AI, governance must grow as adoption grows,” he says. “You don’t use a big hammer on a small pilot. But without governance at scale, AI becomes dangerous.”

    Seen this way, governance isn’t a barrier to innovation; it is the mechanism that allows innovation to move from pilot to real use without losing public trust.

    Why the Middle East Is Moving Faster Than Expected

    Historically, the public sector has lagged behind enterprises in technology adoption. Yet in the Middle East, that pattern is shifting.

    “I’m genuinely impressed by the willingness of governments here to move fast,” Walia says. “They’re open to experimenting, open to failing, and open to learning. That’s not something you hear often about governments.”

    In Qatar’s case, that urgency is paired with long-term thinking. Leaders recognize that AI transformation is not a single project but an operating model that requires sustained investment in data platforms, skills, and institutional alignment.

    “They understand that data is the foundation,” Walia adds. “If you don’t get that right, AI will never deliver the experience or outcomes you expect.”

    The Work That Makes Intelligence Possible

    Every wave of technological transformation has a moment when ambition collides with institutional reality. For AI in government, that moment has arrived.

    The past decade was about imagining what AI could do. The next will be about deciding what it should be allowed to do and under what conditions. That shift moves the center of gravity away from algorithms and toward something far less visible, but far more determinative: the credibility of the data that underpins public decisions.

    In high-trust environments, humans resolve ambiguity through judgment, experience, and informal checks. Machines cannot. They require clarity where governments have historically tolerated inconsistency. They require authority where institutions have relied on interpretation. In this sense, AI does not merely automate government; it exposes it.

    Qatar’s approach reflects an understanding that intelligence at scale is not a software problem but a statecraft problem. Before delegating judgment to machines, the government first does the harder work of aligning itself—defining authoritative sources, standardizing meaning, and building shared foundations across ministries that were never designed to think as a single system.

    This work is slow and largely invisible. It does not generate headlines. But it is the difference between AI that assists and AI that can be trusted to act.

    As agentic systems move from theory to deployment, governments will face a simple but uncomfortable truth: AI will only ever be as coherent as the state behind it. Models can reason. Chips can accelerate. Talent can innovate. But only trusted data can legitimize machine-led action in the public sphere.

    The real divide in government AI will not be between early adopters and laggards. It will be between those willing to do the unglamorous work of building trust and those who try to scale intelligence without it.

    Ultimately, the question isn’t whether governments are ready for AI. It is whether their data is ready to speak with one voice.

    MIT Sloan Management Review Middle East will host the GovTech Conclave 2026, themed “Re-architecting Governance for a New Digital Order,” on April 21, 2026, in Abu Dhabi, UAE.

    To speak, partner, or sponsor, register here.
