How AI Skews Our Sense of Responsibility

Research shows how using an AI-augmented system may affect humans’ perception of their own agency and responsibility.




An MIT SMR initiative exploring how technology is reshaping the practice of management.
More in this series


    As artificial intelligence plays an ever-larger role in automated systems and decision-making processes, the question of how it affects humans’ sense of their own agency is becoming less theoretical — and more urgent. It’s no surprise that humans often defer to automated decision recommendations, with exhortations to “trust the AI!” spurring user adoption in corporate settings. However, there’s growing evidence that AI diminishes users’ sense of responsibility for the consequences of those decisions.

    This question is largely overlooked in current discussions about responsible AI, where practice is aimed chiefly at managing legal and reputational risk: a limited view of responsibility, if we draw on German philosopher Hans Jonas's useful conceptualization. He defined three types of responsibility, but AI practice appears concerned with only the first two. The first is legal responsibility, wherein an individual or corporate entity is held responsible for repairing damage or compensating for losses, typically via civil law; the second is moral responsibility, wherein individuals are held accountable via punishment, as in criminal law.

    What we’re most concerned about here is the third type, what Jonas called the sense of responsibility. It’s what we mean when we speak admiringly of someone “acting responsibly.” It entails critical thinking and predictive reflection on the purpose and possible consequences of one’s actions, not only for oneself but for others. It’s this sense of responsibility that AI and automated systems can alter.

    To gain insight into how AI affects users' perceptions of their own responsibility and agency, we conducted several studies. Two examined what influences a driver's decision to regain control of a self-driving vehicle when the autonomous driving system is activated. In the first study, we found that the more individuals trust the autonomous system, the less likely they are to maintain the situational awareness that would enable them to regain control of the vehicle in the event of a problem or incident. Even though respondents overall said they accepted responsibility when operating an autonomous vehicle, their sense of agency had no significant influence on their intention to regain control. On the basis of these findings, we might expect a sizable proportion of users to feel encouraged, in the presence of an automated system, to shun their responsibility to intervene.

    In the second study, conducted with the Société de l’Assurance Automobile du Québec, a government agency that administers the province’s public auto insurance program, we were able to conduct more-refined analyses. We surveyed 1,897 drivers (mostly of Tesla and Mercedes cars with some level of autonomous driving capabilities) to look at the separate effect of each type of responsibility on the driver’s intention to regain control of the vehicle and found that only the sense of responsibility had a significant effect. As in the first study, the more trust respondents reported having in the automated system, the lower their intention to regain control behind the wheel. It’s particularly notable that only the proactive, individual sense of responsibility motivated respondents to act, which indicates that the threat of liability will be insufficient to prevent AI harm.

    In another study, aimed at understanding the use of risk-prediction algorithms in the U.S. criminal justice system, a significant proportion of the 32 respondents relied excessively on these tools to make their decisions. We judged the tools to be overused when respondents reported using them to determine the length or severity of a sentence, strictly abiding by the tools' results, or taking those results for granted. Besides raising fundamental legal and ethical questions about the fairness, equity, and transparency of such automated judicial decisions, this result also points to an abdication of individual responsibility in favor of algorithms.

    Overall, these initial results confirm what has been observed in similar contexts, namely that individuals tend to lose their sense of agency in the presence of an intelligent system. When individuals feel less control — or that something else is in control — their sense of responsibility is likewise diminished.

    In light of the above, we must ask whether keeping a human in the loop, increasingly understood as a best practice for responsible AI use, is an adequate safeguard. Instead, the question becomes: How do we encourage humans to accept that they have proactive responsibility, and to exercise it?

    As noted at the outset of this article, managers tend to exacerbate the problem by encouraging trust in AI in order to increase adoption. This message is often couched in terms that denigrate human cognition and decision-making as limited and biased compared with AI recommendations, despite the fact that all AI systems necessarily reflect human biases in data selection, specification, and so forth. This position assumes that every AI decision is correct and superior to the human one, and it invites humans to disengage in favor of the AI.

    To offset this tendency, we recommend shifting the emphasis in employee communications from "trust the AI" to "understand the AI" in order to engender informed, conditional trust in the outputs of AI systems and processes. Managers need to educate users to understand an AI system's automated processes, its decision points, and its potential for errors or harm. It's also critical that users be aware of, and understand the nuances of, the edge cases where AI systems can flounder and where the risk of a bad automated decision is greatest.

    Managers also need to position and prepare employees to own and exercise their sense of responsibility. Toyota famously models this by empowering anyone on the factory floor to shut down the production line upon seeing a problem. Empowering employees in this way encourages them to systematically question AI systems and processes, and thus to maintain the sense of agency needed to avert harmful consequences for their organizations.

    Ultimately, a culture of responsibility — versus a culture of avoiding culpability — is always going to mean a healthier and more ethically robust organization. It will be even more important to foster such a culture in the age of AI by leaving clear spaces of possibility for human intelligence.

    Otherwise, Roderick Seidenberg’s prediction about technologies far less powerful than current AI could materialize:

    The functioning of the system, aided increasingly by automation, acts — certainly by no malicious intent — as a channel of intelligence, by which the relatively minute individual contributions of engineers, inventors, and scientists eventually create a reservoir of established knowledge and procedure no individual can possibly encompass. Thus, man appears to have surpassed himself, and indeed in a certain sense he has. Paradoxically, however, in respect to the system as a whole, each individual becomes perforce an unthinking beneficiary — the mindless recipient of a socially established milieu. Hence we speak, not without justification, of a push-button civilization — a system of things devised by intelligence for the progressive elimination of intelligence!




