What’s Your Edge? Rethinking Expertise in the Age of AI

As AI democratizes access to information, leaders should revisit ideas about hierarchy and roles — and focus on their team’s judgment and synthesis skills.

A CEO recently posed a question to me that’s been keeping executives awake: “If my junior analyst can get the same AI-generated insights as my senior strategist, why am I paying for expertise?”

It’s not hyperbole to say that we’re witnessing an unprecedented democratization of knowledge. Information that was once locked in specialized databases, consulting reports, and expert minds is now instantly available to anyone with access to generative AI tools. A startup founder in Indonesia can access strategic frameworks that once required McKinsey consultants. A nurse practitioner in rural Kansas can synthesize medical research like a specialist at Mayo Clinic.

    This isn’t simply another wave of automation; it’s a fundamental restructuring of knowledge itself. Organizations that misunderstand this shift face two risks: overpaying for outdated expertise and undervaluing the human capabilities that remain irreplaceable.

    The Paradox of Abundant Knowledge

    When knowledge becomes commoditized, its value paradoxically shifts from the content to the context. Consider three critical transformations.

• From answers to questions: AI excels at providing comprehensive answers, but only to the questions that we know to ask. The most valuable human expertise increasingly lies in identifying unasked questions and recognizing that there are unknown unknowns. A seasoned strategist understands not only their industry’s current patterns but also its hidden assumptions and unexplored adjacencies — the white spaces that no AI model’s training data has yet captured.
    • From information to judgment: While AI can instantly synthesize vast amounts of information, it cannot bear the weight of consequences. When an AI system recommends restructuring your organization’s supply chain or entering a new market, the accountability remains entirely human. This gap between intelligence and responsibility creates an irreplaceable role for human judgment. Leaders aren’t paid because they can access information; they’re paid to make decisions when the stakes are real and the outcomes are uncertain.
    • From static knowledge to liquid knowledge: Traditional knowledge management has treated information as a fixed asset to be stored and retrieved from knowledge repositories. But AI reveals knowledge dynamically, reshaping it based on the context, user, and moment. Each prompt generates a unique knowledge artifact tailored to specific needs. This shift from static knowledge to liquid knowledge fundamentally changes how organizations should think about subject matter expertise.


    The Cognitive Outsourcing Trap

    The accessibility of AI tools like ChatGPT creates a subtle but serious risk: cognitive atrophy. We’ve seen this pattern before. GPS navigation eroded our spatial memory. Calculators diminished our capacity to perform mental arithmetic. But those were specific skills. Now we risk outsourcing human thinking itself.

Research from the University of Toronto found that while generative AI can boost individual creativity, it reduces the collective diversity of novel ideas, resulting in more homogeneous output overall.1 Other studies have shown that GenAI tools reduce the perceived effort required for critical-thinking tasks, with workers increasingly relying on AI for routine decisions. This raises concerns about long-term cognitive decline and diminished problem-solving capabilities.2

    More concerning is the homogenization of thought. When millions of people pose similar questions and receive similar AI-generated answers, we risk intellectual convergence — a flattening of the diverse, chaotic thinking that drives innovation. Three students in my class recently submitted nearly identical AI-generated architecture proposals for their projects. Efficient? Yes. Creative? No.

    The New Competitive Advantage: Meta-Expertise

Rather than making human expertise obsolete, AI is elevating what expertise means. An IESE Business School study that analyzed U.S. job postings between 2010 and 2022 found that for every percentage point increase in AI adoption at a company, there was a 2.5% to 7.5% increase in demand for management roles, with those positions emphasizing judgment as well as cognitive and interpersonal skills. The most valuable professionals are developing what I call meta-expertise: the ability to orchestrate knowledge from multiple AI systems, validate their outputs, and synthesize information across domains. This requires three distinct capabilities that AI cannot replicate.

    1. Creative synthesis. While AI excels at pattern recognition within existing data, breakthrough innovation comes from connecting seemingly unrelated ideas. When a pharmaceutical researcher sees a connection between a butterfly’s wing structures and drug delivery mechanisms, or an architect applies jazz improvisation principles to planning smart buildings, the creative leaps represent uniquely human cognition.

    2. Contextual wisdom. The intuitive understanding humans have built through years of experience remains difficult to codify and transfer to AI systems. The experienced plant manager who senses equipment problems before sensors detect them, or the sales director who discerns unspoken client concerns, possesses contextual wisdom that transcends data patterns.

    3. Ethical navigation. As AI handles more analytical work, human expertise must increasingly focus on ethical judgment, cultural sensitivity, and stakeholder management. These aren’t edge-case skills; they are central to every significant business decision. The ability to navigate competing interests, understand unspoken cultural norms, and make principled decisions under pressure remains fundamentally human.

    Talent and Learning Principles to Rethink

    Organizations are beginning to make structural changes to capture value from AI, with larger companies leading the way in redesigning workflows and putting senior leaders in critical AI governance roles, McKinsey reports.
Leaders should rethink their talent strategies around four principles.

    1. Redefine Role Hierarchies

Traditional hierarchies based on information access are becoming obsolete. A growing number of companies are redefining their role hierarchies as they incorporate AI, including professional services firms like Accenture, Cognizant, and EY, as well as tech giants. The shift, which some observers call “the great flattening,” involves eliminating layers of middle management and augmenting existing roles with AI.
The goal is to have AI automate routine tasks that used to be performed by lower-level employees and managers, enabling senior staff members to focus on higher-value, strategic work. Your senior strategist’s value isn’t in knowing frameworks anymore. Their value lies in knowing which framework to apply when, how to adapt it to a particular context, and when to abandon frameworks entirely for out-of-the-box human thinking.
    For example, EY has committed $1.4 billion to an AI transformation that it describes as “human-centered.” The company is redefining its internal functions and launching extensive upskilling programs for its 400,000 employees. The training provides foundational AI literacy to every employee and advanced master classes to leaders. By embedding AI into the core of its strategy and democratizing access to AI knowledge through its EY.ai platform, the firm aims to empower employees to move toward higher-value work, close the skills gap, and ultimately reshape roles.
    On the tech side, Amazon is removing some middle-management layers from its structure. CEO Andy Jassy aims to flatten the organization, decrease bureaucracy, and drive decision-making closer to the front lines while using AI to automate tasks.

    2. Invest in Cognitive Sovereignty

Organizations must deliberately preserve and strengthen human thinking capabilities. While documented cases of “AI-free zones” remain scarce in practice, research on cognitive decline from AI overuse suggests that such zones could be a valuable approach.3
    Companies should consider forward-looking moves such as:
• Mandating that strategic proposals include sections developed through human analysis.
• Implementing “human thinking sprints,” where teams solve problems without AI assistance.
• Inserting deliberate friction in certain organizational processes, like procurement, to test the cognitive fitness of employees.
Just as physical training builds muscle memory, these exercises could help employees maintain the cognitive capabilities that differentiate human intelligence.

    3. Develop AI Orchestration Capabilities

    Job postings for AI operations roles have increased 230% in recent months, with companies seeking professionals who can design entire workflows that integrate AI and human capabilities. Some of these emerging roles are called AI operations lead, AI orchestrator, or agent orchestration engineer. The people filling these roles are expected to act as bridges between human creativity and machine intelligence.
Yet hiring AI-savvy talent is only part of the solution. As any CIO will attest, the real challenge is figuring out how to weave AI tools into human workflows. Successfully navigating this complexity will require seasoned practitioners with contextual expertise in both technology and business domains.
After all, the key is understanding when to deploy AI, when to rely on human judgment, and when to combine both, recognizing that adding AI does not always add value.
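To make this concrete, here is a minimal sketch of what an orchestration rule might look like: a simple routing function that decides whether an AI recommendation can proceed automatically, needs a human reviewer, or should be escalated to a senior decision maker. The DecisionRequest structure, the thresholds, and the route_decision function are hypothetical illustrations for this article, not a description of any particular vendor’s tooling or of the roles named above.

```python
from dataclasses import dataclass

@dataclass
class DecisionRequest:
    description: str        # what the AI is recommending
    ai_confidence: float    # model's self-reported confidence, 0.0-1.0 (assumed to be available)
    reversible: bool        # can the decision be cheaply undone?
    financial_impact: float # estimated exposure in dollars

def route_decision(req: DecisionRequest) -> str:
    """Route an AI recommendation to automation, human review, or senior escalation."""
    # High-stakes or irreversible calls always stay with accountable humans.
    if not req.reversible or req.financial_impact > 250_000:
        return "escalate_to_senior_leader"
    # Routine, low-impact, high-confidence calls can proceed, subject to periodic audits.
    if req.ai_confidence >= 0.9 and req.financial_impact < 10_000:
        return "auto_approve_with_audit"
    # Everything else gets a human in the loop before action.
    return "human_review"

if __name__ == "__main__":
    req = DecisionRequest("Reorder packaging supplier", 0.95, True, 4_000)
    print(route_decision(req))  # -> auto_approve_with_audit
```

The specific thresholds matter less than the design choice they illustrate: the boundary between AI autonomy and human judgment is made explicit, reviewable, and easy to tighten as circumstances change.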

    4. Rethink Learning Programs

This shift in knowledge work fundamentally challenges traditional education and professional development models. Why do we still need advanced degrees when AI can synthesize expert knowledge instantly? Because higher education teaches the art of knowledge creation: how fields establish truth, evolve understanding, and challenge paradigms. That meta-expertise will only become more critical as information becomes ubiquitous.
    Organizations should ensure that corporate learning programs emphasize three skill sets.
    • Critical evaluation: Teaching professionals to assess AI outputs, identify biases, and recognize limitations.
    • Creative application: Developing skills to frame problems in novel ways and make cross-domain connections.
    • Ethical reasoning: Building capacity for moral judgment and stakeholder balance.

    The Path Forward: Thoughtful Augmentation

Recent studies have found that generative AI technologies can outperform human CEOs in data-driven strategic tasks but fail when handling unpredictable, first-of-their-kind disruptions.4 This illustrates both AI’s promise and its limitations: Large language models are exceptional at pattern recognition and optimization but unable to navigate uncertainty or bear accountability for outcomes.

    Then there’s the matter of innovation. Research conducted at Google identified psychological safety, not technical skills, as the single biggest distinction between innovative and non-innovative teams. This suggests that as AI handles more technical work, the human elements of trust, creativity, and collaboration become even more vital for success.


The organizations that will thrive are the ones that neither bet entirely on AI nor stubbornly preserve traditional approaches. Success lies in thoughtful augmentation: using AI to recognize patterns, synthesize data, and generate options while leaving the creative leaps, ethical decisions, and accountability-bearing choices to humans.

    For leaders, this requires deliberate choices about cognitive sovereignty. The convenience of access to instant AI answers shouldn’t eliminate the creative struggle of human thinking. Sometimes the most strategic decision is to sit with the discomfort of uncertainty rather than immediately querying an AI tool.

    Concrete Actions Leaders Should Take

If you’re a leader who is on board with this approach, what belongs on your team’s immediate and long-term to-do lists?
And how can you tell whether your colleagues are managing AI augmentation well or whether trouble is brewing? Here are some steps to plan for, along with red flags to monitor:

    Immediate Steps

    • Audit current roles to identify where AI augmentation versus human judgment alone adds value.
    • Create deliberate practices that preserve human thinking capabilities.
    • Establish clear accountability frameworks that maintain human responsibility for AI-assisted decisions.
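One lightweight way to operationalize that last point is an explicit decision record that names a human owner for every AI-assisted decision. The sketch below is illustrative only; the AIAssistedDecisionRecord structure and its field names are assumptions for this article, and any real framework would need to match your own governance and audit requirements.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIAssistedDecisionRecord:
    decision: str             # what was decided
    ai_tools_used: list[str]  # which AI systems contributed to the analysis
    human_owner: str          # the person accountable for the outcome
    rationale: str            # why the owner accepted, modified, or rejected the AI output
    overrode_ai: bool         # did the owner depart from the AI recommendation?
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: the accountable human is always named, even when the AI output is accepted as is.
record = AIAssistedDecisionRecord(
    decision="Enter the Southeast Asian mid-market segment",
    ai_tools_used=["internal market-sizing model"],
    human_owner="VP of Strategy",
    rationale="AI sizing accepted; entry sequencing adjusted based on partner relationships",
    overrode_ai=False,
)
print(record.human_owner, record.overrode_ai)
```

However it is implemented, the point is the same: the record makes it impossible for responsibility to quietly migrate from a named person to “the model.”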

    Long-Term Strategies

    • Redesign career paths around meta-expertise development rather than information accumulation.
    • Build cross-functional teams that combine AI orchestration with functional domain expertise.
    • Invest in continuous learning programs focused on creative synthesis and ethical reasoning.

    Warning Signs to Monitor

    • Increasing homogeneity in creative proposals.
• Overreliance on AI for routine decisions without human review.
• Declining ability among employees to work without AI assistance.
• Loss of tribal knowledge as employees stop developing deep expertise.

    The Courage to Remain Human

As AI capabilities expand, the ultimate competitive advantage may be the courage to remain cognitively sovereign. This means deliberately preserving and cultivating uniquely human capabilities, even when outsourcing them would, at times, be more efficient.
    The question facing leaders isn’t whether human expertise remains relevant in the AI age. It’s whether organizations will thoughtfully cultivate the uniquely human capabilities that no algorithm can replicate — the weight of accountability, the spark of creativity, and the wisdom to know which questions shouldn’t be outsourced to machines.
    The companies that navigate this challenge successfully won’t just survive the AI revolution. They’ll define what human-centered innovation looks like in an age of ubiquitous intelligence.

    References

    1. A.R. Doshi and O.P. Hauser, “Generative AI Enhances Individual Creativity but Reduces the Collective Diversity of Novel Content,” Science Advances 10, no. 28 (July 12, 2024): 1-9, https://doi.org/10.1126/sciadv.adn5290.
    2. H.-P. Lee, A. Sarkar, L. Tankelevitch, et al., “The Impact of GenAI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers,” in “CHI ’25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems” (Association for Computing Machinery, 2025): 1-22, https://doi.org/10.1145/3706598.3713778.
3. A.S. George, T. Baskar, and P.B. Srikaanth, “The Erosion of Cognitive Skills in the Technological Age: How Reliance on Technology Impacts Critical Thinking, Problem-Solving, and Creativity,” Partners Universal Innovative Research Publication 2, no. 3 (May-June 2024): 147-163, https://doi.org/10.5281/zenodo.11671150.
    4. H. Mudassir, K. Munir, S. Ansari, et al., “AI Can (Mostly) Outperform Human CEOs,” Harvard Business Review, Sept. 26, 2024, https://hbr.org.
