Don’t Get Distracted by the Hype Around Generative AI

Tech bubbles are bad information environments.

    Technology bubbles can pose difficult quandaries for business leaders: They may feel pressure to invest early in an emerging technology to gain an advantage over competitors but don’t want to fall for empty hype. As we enter a period of greater economic uncertainty and layoffs in multiple industries, executives are grappling with questions about where to cut costs and where to invest more.

    The rapidly developing field of artificial intelligence and machine learning poses a particular challenge to business decision makers. Investments in proven predictive models are increasingly seen as sound and are expected to drive an increase in spending on AI from $33 billion in 2021 to $64 billion in 2025. But further out on the cutting edge, generative AI is sparking a huge amount of noise and speculation.

    Generative AI refers to machine learning models — such as ChatGPT, Bing AI, DALL-E, and Midjourney — that are trained on vast databases of text and images to generate new text and images in response to a prompt. Headlines like “Ten ChatGPT Hacks You Can’t Live Without!” and “You’re Using ChatGPT Wrong! Here’s How to Be Ahead of 99% of ChatGPT Users” have begun to proliferate. Meanwhile, Axios reported that funding is pouring into generative AI, rising from $613 million in 2022 to $2.3 billion in 2023 — money that will only fuel the hype cycle.

    Business leaders who don’t want to miss a great opportunity but don’t want to waste time and money implementing oversold technologies would do well to keep in mind some fundamental realities about tech bubbles. First, these phenomena rely on narrative — stories that people tell about how the new technology will develop and affect societies and economies, as business school professors Brent Goldfarb and David Kirsch wrote in their 2019 book, Bubbles and Crashes: The Boom and Bust of Technological Innovation. Unfortunately, the early narratives that emerge around new technologies are almost always wrong. Indeed, overestimating the promises and potencies of new systems is at the very heart of bubbles.

    Business futurists and analysts have terrible track records when it comes to accurately predicting the future of technological development, because no one can foresee the creative ways in which humans will adopt and implement tools over time or the myriad opportunistic ways in which they will use new tools to exploit or gain power over others. Or, as futurist Roy Amara put it in what became known as Amara’s law, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

    The unrealistic narratives that drive technology bubbles are already on display with generative AI. Some enthusiasts claim that ChatGPT is only a few steps away from artificial general intelligence — becoming an independent entity capable of cognition equal to or better than that of humans. Sam Altman, who, as CEO of ChatGPT maker OpenAI, has considerable skin in the game, has claimed that AI “will eclipse the agricultural revolution, the Industrial Revolution, the internet revolution all put together.” A bleaker view that nonetheless assumes a similarly outsized impact for these large language models comes from the Future of Life Institute, which published an open letter calling for a six-month pause on training AI systems more powerful than GPT-4, the large language model underlying ChatGPT Plus, on the grounds that they threaten humanity.

    Although these boosters and critics have wildly divergent perspectives, they are united in promoting feverish visions of the future that are disconnected from the reality of what businesses can reliably accomplish with generative AI tools today. They do little to help leaders understand how these technologies work and what risks and limitations they carry, or to work through whether the tools will improve a company’s routines and contribute to its bottom line. The news media, itself an industry fueled by FOMO (fear of missing out), further inflates the bubble with breathless coverage that runs ahead of the facts. A recent Wall Street Journal article titled “Generative AI Is Already Changing White-Collar Work as We Know It” offers no real evidence of change in white-collar workplaces but rather presents business leaders’ speculations on the technology’s potential impact. Like other reports portending doom for workers, it points to a summary of a paper by researchers at OpenAI and the University of Pennsylvania that seeks to predict how many jobs will be affected by these new software systems.

    In fact, such forecasts have a history of inaccuracy. In a recent essay, economics professor Gary Smith and retired professor and technology consultant Jeffrey Funk pointed out that the OpenAI/Penn study used the same U.S. Department of Labor database as a 2016 Oxford University and Deloitte study, which claimed it was highly probable that many jobs would be automated out of existence by 2030. Both studies attempted to quantify the percentage of jobs made up of repetitive tasks and then projected how many of those jobs would be lost to technological change. Given that trends over the past seven years have not borne out the 2016 study’s predictions, there is little reason to believe such predictions will be accurate now.

    A Call for Caution

    Given the poor track record of past predictions, executives must practice prudence, applying sober reasoning when confronted with hype about the future impacts of technologies. Teams will need to practice organized skepticism: not knee-jerk, dismissive doubting but, rather, rigorous and scientific assessment and reasoning. Claims about the efficacy of new technologies need to be scrutinized for empirical strength. Rather than getting caught up in questions that invite speculation, like “How could this evolve?” or “What are the implications?” it’s better to start with questions that establish a factual basis: “What do we know?” and “What is the evidence?” Ask specific questions about how the technology works, how reliable its predictions are, and how good its other outputs are.

    Business leaders must be especially mindful to engage in critical thinking when information comes from known vectors of technology hype, including consultancies, vendors, and industry analysts.

    While experimenting with publicly available generative AI tools can be cheap and instructive, companies must carefully assess the potential risks of using new technologies. ChatGPT, for example, is known to create falsehoods, including citing references to texts that simply do not exist. Using this technology requires careful oversight, especially where output from these generative AI systems might go out to customers and expose the company to reputational harm. Companies also run the risk of losing control of intellectual property or sensitive information when these systems are used without oversight, as Samsung discovered when its employees inadvertently leaked confidential corporate data by feeding it into ChatGPT, which uses submitted information to further train its models. I also know artists, designers, and publishers who refuse to use generative AI in ways that might harm their own or their clients’ intellectual property.

    Given these potential risks, companies that are going to experiment with generative AI should set ground rules for its use. An obvious first step is to require that any employees using these technologies in their work disclose it. A company’s use policies can also set basic requirements, such as insisting that generative AI use not break existing ethical and legal codes, and organizations should consider limiting what kinds of company data can be entered into these systems. The Society for Human Resource Management and other groups have recently published workplace guidelines for the use of generative AI, and business leaders would be wise to keep up with such guidance as it evolves.

    There are also other risks companies should be mindful of. Technology critics have argued that corporations will deploy generative AI in ways that degrade workers’ lives. Leaders would do well to ensure that this does not happen and to instead promote uses of these technologies that make workers’ lives easier, less stressful, and more humane. FOMO and competitive pressures can be useful if they keep managers attentive to changes around them, but managers should not allow these anxieties to steer them into irrational and imprudent decisions. The excitement around generative AI will very likely follow the course of enthusiasm for other digital technologies outlined in Nicholas Carr’s 2004 book, Does IT Matter? The adoption of digital technologies often creates short-term advantages within a sector, but these advantages slip away as the technologies become the norm — part of the corporate landscape, like text editors, spreadsheets, and customer relationship management systems.

    In other words, there is currently no evidence that your company will fall behind strategically or even be disrupted if you press pause to develop a thoughtful course of action rather than leaping headfirst into generative AI. In this context, leaders would be well advised to focus on the fundamental aims of their business and ask, “Will this system help us achieve our ends?” If someone says it will, ask them to prove it.
