
Why AI Is Forcing Governments to Rethink Power, Control, and Trust

“Effective governance in the intelligent age is not about replacing human judgment. It’s about deliberately re-architecting where judgment sits,” says Kelly Ommundsen, Head of Digital Inclusion at WEF.


    Over the past decade, governments around the world have grown increasingly unwilling to accept that the internet, and by extension digital technologies, should exist beyond regulatory reach. What began as a few cautious interventions has become, as legal scholar Anu Bradford calls it, a “cascade of regulation.” In her book Digital Empires, she describes escalating battles between governments and technology companies—and among regulators themselves—that will shape the values of digital society and the future of the digital economy.

    These conflicts are not new. States have long grappled with how to govern technologies that are transnational by design and resistant to traditional jurisdictional boundaries. But generative AI has dramatically accelerated the pace and the stakes of this struggle. While the digitization of government services has been underway for years, the rise of large-scale generative models has pushed governments into an era of AI-driven governance.

    To understand what this shift means in practice, we spoke with Kelly Ommundsen, Head of Digital Inclusion at the World Economic Forum, about how governments can deploy AI not just efficiently, but legitimately, without hollowing out human judgment or public trust.

    Rethinking the Human–AI Relationship

    Few worries have dominated the public’s attention as much as the fear that AI will cause mass unemployment or take over human decision-making. But Ommundsen says this view misses a more profound transformation underway.

    “Effective governance in the intelligent age is not about replacing human judgment,” she says. “It’s about deliberately re-architecting where judgment sits.”

    As AI systems increasingly automate or assist decisions—whether in welfare allocation, licensing, tax enforcement, or risk assessment—the role of government begins to shift. Rather than making every decision directly, governments are designing the systems that shape those decisions: the incentives, the guardrails, and the points at which a human must step in.

    Like any shift of this scale, it carries risks if not handled with care. Rushing to deploy generative AI without the necessary safeguards can amplify problems that already plague digital systems. Studies and well-documented incidents show that AI models are prone to bias, struggle with factual accuracy, require substantial computational resources, and often operate as opaque systems whose inner logic is difficult to inspect.

    “Well-governed AI systems do not operate as autonomous black boxes,” Ommundsen stresses. Instead, they must be embedded within transparent chains of accountability. Humans must set objectives, define acceptable trade-offs, and retain authority over consequential outcomes.

    “In that sense,” she adds, “AI’s value lies not in substituting judgment, but in absorbing complexity, reducing friction, and expanding the range of viable choices available to decision-makers.” The distinction between delegating responsibility and abdicating it sits at the heart of the challenge of governing AI.

    Control, Accountability, and the Limits of Automation

    By now, the shortcomings of AI systems are well documented. In short: models hallucinate. They make up legal precedents, flag false positives, and reproduce social stereotypes embedded in their training data. In public-sector contexts, these failures can result in denied benefits, unlawful surveillance, or institutionalized discrimination.

    So how can governments responsibly deploy such technologies for the public? Ommundsen says the key is not speed or complexity, but control. “The real measure of effective governance,” she says, “is not whether AI enables faster decisions, but whether governments retain meaningful control over how those decisions are made.”

    In the age of intelligence, trust does not come from the promise of perfection. It comes from knowing that power is governed, accountability is clear, and humans remain ultimately responsible for outcomes (even when machines are involved). This reframing is critical as it pushes governments to confront uncomfortable questions about control, authority, and responsibility.

    Debates over control have surrounded the internet since its inception. Some argue that its decentralized design means any government regulation would conflict with its core values.

    Ommundsen challenges this binary. She suggests we see regulation not as something that slows innovation, but as a kind of infrastructure.

    “Treating regulation as infrastructure means moving away from regulation as a one-off intervention,” she explains, “and toward regulation as a durable, enabling system that is designed to be used, updated, and relied upon over time.”

    In practical terms, this means abandoning static, technology-specific rules in favor of adaptable frameworks. Rather than prescribing rigid requirements for each new technological development, governments should focus on clear principles and shared standards.

    More than a thousand AI policy initiatives across 69 countries have been documented in recent years. Many of them reflect this infrastructural turn: regulatory sandboxes, shared testing environments, and continuous oversight mechanisms that evolve alongside the technologies they govern.

    “For AI specifically,” Ommundsen notes, “this also means embedding values like safety, fairness, and accountability directly into technical and procurement standards, rather than trying to enforce them afterward.”

    Laws are often blunt tools. Even as consensus grows around the need for governance, policymakers, businesses, and citizens remain frustrated by regulation’s limitations. But when regulation is treated as infrastructure rather than obstruction, it becomes a condition for scale.

    Ommundsen says, “When regulation works this way, it stops being a brake on innovation and becomes the thing that makes innovation sustainable.”

    Closing the Loop

    One popular idea in AI governance is the use of “in-the-loop” systems, such as human-in-the-loop, feedback-in-the-loop, and model-in-the-loop. While these approaches differ in emphasis, they share a core assumption: oversight must be ongoing, not episodic.
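
    To make the idea concrete, here is a minimal sketch in Python of a human-in-the-loop gate (the names, thresholds, and routing rule are hypothetical illustrations, not drawn from any system discussed in this article): the model proposes an outcome, low-confidence or consequential cases are escalated to a reviewer, and every decision is logged so oversight stays continuous.

        from dataclasses import dataclass

        @dataclass
        class Proposal:
            applicant_id: str
            approve: bool       # the model's suggested outcome
            confidence: float   # the model's self-reported confidence, 0..1
            high_impact: bool   # e.g., a benefit denial or license revocation

        def decide(proposal: Proposal, review_queue: list, audit_log: list) -> str:
            """Route a proposal to automatic action or to a human reviewer."""
            needs_human = proposal.confidence < 0.9 or (proposal.high_impact and not proposal.approve)
            if needs_human:
                review_queue.append(proposal)  # a person retains authority here
                outcome = "escalated_to_reviewer"
            else:
                outcome = "auto_approved" if proposal.approve else "auto_denied"
            # Closing the loop: log every outcome so escalation and denial
            # patterns can be monitored and fed back into policy.
            audit_log.append((proposal.applicant_id, outcome, proposal.confidence))
            return outcome

        queue, log = [], []
        print(decide(Proposal("A-123", approve=False, confidence=0.72, high_impact=True), queue, log))

    The point of the sketch is the routing rule: authority over consequential outcomes stays with a person, and the audit log is what closes the loop, making patterns in escalations and denials reviewable over time.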

    Ommundsen notes that many of these systems fall short because they conflate data collection with real participation. Governments collect vast amounts of feedback from citizens, but often this input does not reach decision-making authorities. Feedback is gathered, but not used. Consultation becomes a ritual rather than a mechanism for change.

    “The most effective feedback systems are not surveys or comment boxes,” she says. “They’re loops that actually close.” 

    Ommundsen suggests that governments need to move from just consulting citizens to continuous listening. Feedback should be embedded into how services operate and evolve. In such systems, citizen input is not merely acknowledged; it leads to real improvements.

    This distinction is especially important in contexts where digital governance is experienced primarily as surveillance. Participation cannot coexist with systems that treat citizens as mere data points. “Real participation happens,” Ommundsen says, “when citizens stop being surveyed and start being heard. When that shift registers with everyday citizens, trust stops being something you ask for—and becomes something you earn, in compound interest.”

    Even with years of learning, many governments still act only after problems arise, rather than preventing them.

    “Governments need to accept that uncertainty is permanent,” Ommundsen says, “and design systems that can adjust without starting from scratch.”

    Anticipatory regulation often fails when it becomes an exercise in futurism—predicting technological trajectories and legislating in advance. The real task is building mechanisms that can spot early signs of problems, test responses, and course-correct before crises emerge. 

    In practice, this means working more closely with industry and academia. It also requires political courage: leaders must be willing to experiment, pause, and make changes without seeing every adjustment as a failure. 

    Early signs of this approach are evident in initiatives like the World Economic Forum’s Global Regulatory Innovation Platform (GRIP), where governments are experimenting with sandboxes, outcome-based regulation, and staged authorizations. These models are gaining traction in sectors such as digital finance, healthcare, and AI.

    “The common thread is a willingness to regulate with innovation rather than perpetually chasing it from behind,” she says. 

    What Real GovTech Transformation Looks Like

    Governments around the world increasingly speak of GovTech’s potential to unlock public value. But not all digital reforms are created equal. “Digitization is putting the same form online,” Ommundsen says. “Transformation is questioning why the form exists at all.”

    Many GovTech initiatives simply reproduce analogue processes in digital form, an approach she describes as building “elaborate photocopiers.” They replace queues with portals and paper with PDFs, improving convenience without altering how governments work at their core.

    Transformative governments do something different. They enable data to reshape service models. They treat technology teams as policy partners rather than IT vendors. And they ask harder questions: Why does this service exist? Who does it serve? Could it be designed out altogether?

    This distinction is increasingly evident in the Global GovTech Intelligence Hub, which documents cases in which governments have moved beyond surface-level digitization to rethink institutional incentives and governance models. Across these examples, success is not driven solely by better software, but by better questions about data, design, and institutional incentives.

    Designing for Inclusion

    A persistent myth in technology policy is that inclusion will follow innovation if given enough time. Ommundsen rejects this outright. “Inclusion does not happen by default; it has to be engineered,” she says. The perceived trade-off between speed and equity, she argues, is a false one—born of systems designed for early adopters, with the hope that marginalized users will eventually catch up. They rarely do. Instead, the gaps widen.

    The better approach is to make universal access a technical requirement from the outset. India’s Aadhaar system is an example. Its biometric authentication was designed to work for people with low literacy and no fixed addresses, making equity a core part of the technology. As a result, about 95 percent of the population, or 1.38 billion people, enrolled.

    Designing for the edges, Ommundsen notes, often improves systems for everyone. Users facing poor connectivity, limited literacy, or disabilities stress-test services in ways that reveal hidden assumptions and vulnerabilities. When inclusion is treated not as an afterthought but as a design constraint, speed and equity cease to be opposing forces. They become mutually reinforcing.

    In the grand scheme of things, the question is no longer whether governments will use AI but whether they can do so in ways that preserve judgment, accountability, and the public’s trust. The intelligent age does not demand less governance. It requires governance designed for uncertainty, grounded in values, and willing to listen rather than merely surveil.


    Kelly Ommundsen will be speaking at MIT Sloan Management Review Middle East’s GovTech Conclave 2026, themed “Re-architecting Governance for a New Digital Order,” on April 21, 2026, in Abu Dhabi, UAE. 

    To speak, partner, or sponsor, register here.
