In an AI-Generated Internet, Human Judgment Becomes a Competitive Advantage

Generative AI has scaled content production faster than verification can keep up. Leaders who treat this as a technology problem will address the wrong variable.

    According to Stanford’s 2026 AI Index report, more than half of the internet’s traffic is no longer human. Built as an open information superhighway, the web has long grappled with structural strains—from economic pressures and platform concentration to the persistent influence of bad actors. Now, artificial intelligence is emerging as a powerful new force reshaping how information is created, distributed, and consumed, raising fresh questions about its net impact on the digital ecosystem.

    Organizations now face a structural dilemma: scale information access through AI-generated outputs or preserve the systems of verification that sustain credibility. Most optimize for speed and engagement, underestimating how quickly trust erodes when users cannot trace or validate information. The competitive risk is not misinformation alone but the degradation of trust as an institutional asset.

    Take Google’s search engine as a case study. Many of us spent years relying on those first ten blue links in search results, trusting them almost completely.

    In 2024, with AI Overviews, Google reversed that. The company collapsed multiple sources into a single, prioritized synthesized answer, shifting users from active evaluators of sources into passive consumers of information.

    The systemic shift carries consequences. When users no longer click through to original reporting, research, and analysis, the incentive to produce high-quality, verifiable content begins to erode. The very material that feeds these AI systems risks thinning out.

    While this improved immediacy, it introduced new risks: altered meaning, reduced transparency, and weakened incentives for original content creation. What began as a product improvement quickly exposed a deeper question: if users no longer engage with verifiable sources, what sustains trust in the system as a whole?

    The internet is flooded with AI-generated text, images, videos, and podcasts. Whether you like it or not, you cannot escape it. Even setting aside the societal repercussions of generative AI (which we shouldn’t), it has turned many platforms and communities into a graveyard of archived forums, memories, and knowledge. Once users lose confidence in the reliability of information, sustaining engagement becomes harder.

    The new AI tools aren’t fully dependable. Google sometimes rewrites headlines, which can distort the author’s original meaning. AI Overviews has its own set of problems, and the constant changes in SEO policy that impact rankings are a different story altogether.

    This dilemma can be understood through a simple operating model built on two variables: how quickly content is generated and delivered, and how rigorously that content is sourced, traced, and reviewed. Together, these dimensions define the range of choices organizations make when deploying AI systems.

    At one end of the spectrum are high-velocity, low-verification systems—platforms optimized for scale, where content is produced rapidly but with limited transparency or oversight. These systems maximize efficiency but remain vulnerable to inaccuracies and declining user trust. At the other end are low-velocity, high-verification models, where human oversight, sourcing, and review are central. These systems prioritize credibility, though often at the cost of speed and scale.

    The verification–velocity matrix

    • High velocity, low verification: Maximizes output but accumulates trust risk over time. The dominant current mode for most organizations.
    • High velocity, high verification: Target position. Requires deliberate architecture: provenance tagging, structured sourcing, human oversight at scale. Sustainable competitive advantage.
    • Low velocity, low verification: Strategic drift. Neither efficient nor credible. Reached by neglect, not design.
    • Low velocity, high verification: Traditional credibility model. Sustainable for high-trust, low-volume domains. Difficult to scale under competitive pressure.

    Between these extremes lies a more complex middle ground. High-velocity, high-verification systems attempt to combine automation with structured oversight, embedding traceability and accountability into fast-moving environments. By contrast, low-velocity, low-verification ecosystems tend to generate limited value, lacking both efficiency and reliability.
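    To make the four quadrants concrete, here is a minimal sketch in Python. The article defines the matrix only qualitatively, so the thresholds, class names, and scores below are illustrative assumptions, not measurements from any real system.

```python
from dataclasses import dataclass

# Illustrative cutoffs; the article describes the quadrants
# qualitatively, so these numbers are assumptions for this sketch.
VELOCITY_THRESHOLD = 0.5      # fraction of content published without review delay
VERIFICATION_THRESHOLD = 0.5  # fraction of claims carrying a traceable source

@dataclass
class ContentPipeline:
    name: str
    velocity: float      # 0.0 (slow, heavily gated) to 1.0 (instant publication)
    verification: float  # 0.0 (no sourcing) to 1.0 (every claim traceable)

def classify(pipeline: ContentPipeline) -> str:
    """Place a pipeline into one of the four quadrants of the matrix."""
    fast = pipeline.velocity >= VELOCITY_THRESHOLD
    verified = pipeline.verification >= VERIFICATION_THRESHOLD
    if fast and verified:
        return "target position: provenance tagging plus human oversight at scale"
    if fast and not verified:
        return "dominant current mode: maximum output, accumulating trust risk"
    if not fast and verified:
        return "traditional credibility model: sustainable but hard to scale"
    return "strategic drift: neither efficient nor credible"

# Example: a hypothetical AI summarization product that publishes instantly
# with little sourcing lands in the high-velocity, low-verification cell.
print(classify(ContentPipeline("ai-summaries", velocity=0.9, verification=0.2)))
```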

    Most organizations today are moving toward high-speed, low-verification models, driven by competitive pressure to scale AI capabilities quickly. Whether that position is sustainable depends on how effectively they can restore verification without sacrificing the advantages of speed.

    In 2023, Canadian author Cory Doctorow coined the term enshittification to describe the decay of online platforms. The following year, it was named a “word of the year.” Defined as “the gradual deterioration of a service or product brought about by a reduction in the quality of service provided, especially on an online platform, and as a consequence of profit-seeking,” the word instantly became a favorite among writers and researchers because it captured where the internet was headed.

    “We’re coming out of this Web 2.0 era where we actually relied upon humans to do stuff, like forums, and social media,” points out Chris Albon, Senior Director of Machine Learning at the Wikimedia Foundation. 

    Institutionalizing Verification

    Long before the rise of generative AI, Wikipedia, operated by the Wikimedia Foundation, embedded verification into its core design. Every claim must be supported by a cited source that can be independently checked; content that does not meet this standard is removed through community review.

    This model has proven resilient.

    Albon points to the rule the foundation has been following for over 20 years: “Everything that is added to the site needs to have a verifiable source—a link that one can click through. If there is some kind of slop that’s put in, it’s quickly identified by other editors and [deleted].”

    While Wikipedia has denounced AI-generated slop, its editors do use these tools. “We make smaller models, the purpose of which is to help editors do their work, like identify vandalism, classify articles, or suggest edits. The communities are trying to figure out where the lines are,” notes Albon. The fight is against poorly generated material drowning out the good. As per the latest policy update, the online encyclopedia has banned the use of LLMs to generate or rewrite its content.

    He adds, “If you don’t have that, you have to fight it some other way.” 

    Similarly, the Mozilla Foundation is advancing a human-centered AI framework that emphasizes provenance, creative ownership, and collective oversight—highlighting an alternative governance model to purely scale-driven systems.

    “AI will shape culture, but the choice is whether it flattens it or helps it flourish,” says Ziyaad Bhorat, Senior Advisor, AI Ecosystem Strategy at Mozilla Foundation. “Creatives told us clearly: the future of art must be built on process, purpose, and human lineage, not just speed and scale.” The foundation holds that technology should serve humanity’s imagination, not erode it. Like Wikimedia, it also believes in collective work.

    The Imaginative Intelligences Assemblies project, formed by 91 creatives, technologists, and thinkers in collaboration with the Berggruen Institute, a think tank, calls for creative expression that intentionally places humans at the core.

    There is a pattern here of people collectively finding clarity in this online void. People are relying on like-minded others to ensure the internet retains corners where human intellect thrives alongside AI.

    There has been ongoing anxiety about how to distinguish AI-generated content from human-created material. For a while, spotting it felt like a puzzle to be solved: people pointed out hands with an unusual number of fingers or cars traveling in reverse. As the models improved, though, the fun turned into a nightmare, because the differences became much harder to spot.

    It’s exhausting to always question what you see online. Many online communities, especially on Reddit, are now asking the same thing: Is this AI? 

    The Behavioral Risk

    Albon believes there will be greater emphasis on islands of humanity in a big sea of slop. “Where there’s a human at the other end of the conversation, those will be increasingly valued as spaces,” he says. 

    Some researchers, developers, and creatives still refuse to use AI. They push back against its spread by creating spaces for discussion and critical thinking. Professor Emily M. Bender and Dr. Alex Hanna are among those who warn about the effects of AI on our digital lives and reject its social costs.

    “It’s like getting a Post-it note with the answer when you have a book placed in front of you. As a reader, you kind of know which one you want,” says Albon. That works for trivial questions, but people now depend on these AI tools for guidance on everything, and that dependence has gone wrong in multiple instances.

    According to a BBC study, even the most advanced AI chatbots gave wrong answers 45% of the time. Yet many users don’t grasp this. When researchers tested whether users would believe what the AI told them regardless of accuracy, they found a “cognitive surrender” that effectively overrode users’ intuition and deliberation.

    “I want that human perspective there as well, at least as a counterpoint,” says Albon, as someone who loves an occasional rabbit hole deep dive into information. 

    That’s not the general consensus.

    In 2025, some ChatGPT users began to notice that the chatbot seemed to agree with them constantly, even in potentially harmful situations. OpenAI CEO Sam Altman later acknowledged that recent updates had made the chatbot embrace toxic positivity and become “too sycophant-y and annoying.” The unexpected twist was that, when the team tried to fix the issue, some users asked for its old “yes man” tone back, said Altman.

    The dependence on these tools affects us in ways we don’t yet fully understand. A Stanford study concluded that AI’s flattering behavior shapes how we think about society and ethics.

    Reading, learning, and applying knowledge are changing drastically in the age of AI. Last year, Wikipedia pageviews dropped by 8% as AI summaries siphoned traffic away from search engine results. There have been many reports of people reading AI-generated book summaries rather than the books themselves.

    For organizations, this introduces a new risk category: not just flawed information, but diminished user capacity to evaluate it.

    Finding Counterstructures 

    AI has given us reasons to be optimistic and reasons to worry, but one undeniable fact is that it is here to stay. It is instinctive for humans to want the final say, but the history of human-machine interaction tells us that this is difficult to achieve.

    Albon advises, “Spend a little time with material not designed to just engage and enrage you.” He encourages us to embrace the wisdom of the crowds, where hundreds of individuals have spent decades carefully selecting wording to capture all possible perspectives.

    One needs to rethink when to use AI and, more importantly, when not to.

    The verification–velocity tradeoff requires coordinated, organization-wide decisions rather than isolated technical fixes. At the C-suite level, leaders must define where the organization sits on this spectrum and elevate trust to a measurable performance indicator, aligning product, revenue, and AI strategies around a clear stance on verification. 

    For functional leaders, the challenge is operational: product teams need to design systems that make sources visible and enable traceability; marketing teams must adapt to discovery environments increasingly mediated by AI rather than traditional search; and data and AI teams must build constraints that prioritize provenance over unchecked generation. 

    At the board level, the issue becomes one of governance. Directors should assess exposure to AI-related trust erosion by asking what proportion of outputs are verifiable, where the organization relies on untraceable content, and how resilient the business model remains if user trust declines.
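    As one hedged illustration of what “provenance over unchecked generation” could look like in practice, the Python sketch below attaches source records to generated output, refuses to publish anything untraceable, and computes the board-level metric of what proportion of outputs are verifiable. All type names, fields, and the example URL are hypothetical, invented for this sketch rather than drawn from any real pipeline or standard.

```python
from dataclasses import dataclass, field

# Hypothetical record types for illustration only; these field names
# are assumptions, not a published standard or any vendor's API.
@dataclass
class Source:
    url: str
    retrieved_at: str  # ISO 8601 timestamp of when the source was checked

@dataclass
class GeneratedContent:
    text: str
    sources: list[Source] = field(default_factory=list)

def is_publishable(item: GeneratedContent) -> bool:
    """Gate publication on provenance: every output must carry at
    least one source a reader can click through and check."""
    return len(item.sources) > 0

def verifiable_share(batch: list[GeneratedContent]) -> float:
    """The board-level question: what proportion of outputs are verifiable?"""
    if not batch:
        return 0.0
    return sum(is_publishable(item) for item in batch) / len(batch)

batch = [
    GeneratedContent(
        text="More than half of internet traffic is no longer human.",
        sources=[Source("https://example.org/ai-index", "2026-01-15T00:00:00Z")],
    ),
    GeneratedContent(text="An unsourced claim."),  # fails the provenance gate
]
print(verifiable_share(batch))  # 0.5
```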

    Dealing with these big changes online can feel overwhelming, especially amid so much misinformation. That’s why new tools and communities are helping people check context and verify what they read. John Seely Brown, former chief scientist at Xerox and head of its Palo Alto Research Center, recommends “rigorous inquiry”—asking questions like ‘What’s the agenda behind this information?’, ‘How current is it?’ and ‘How does it connect with what else I’m finding?’ In short, question everything around you.

    User-created communities are still working together to solve these problems. As Doctorow says at the start of his book, Enshittification, “It’s not just you. The internet is getting worse, fast.”
