How to Become a Critical Reader of Academic Research
While social science has provided enormous insight to help managers be more effective, not all research is good research. Here’s how to be a smart consumer of new ideas.
On July 15, 2021, I received an email from the three authors of the data-sleuthing website Data Colada. They informed me that they would soon be publishing compelling evidence that a field experiment in my coauthored 2012 research paper “Signing at the Beginning Makes Ethics Salient and Decreases Dishonest Self-Reports in Comparison to Signing at the End” was based on fabricated data.1
What had become popularly known as the “signing-first” paper claimed to show that when people sign a statement promising to tell the truth before they fill out a form, such as a tax or insurance form, they provide more honest information than when asked to sign such a statement after providing the requested information. The finding was widely seen as an effective means of “nudging” more ethical behavior and was broadly disseminated in the business world; many organizations (including Slice Insurance, a consulting client of mine) had taken our advice and moved the signature lines in their forms to try to improve people’s honesty.
Data Colada’s allegations of fraud shocked me. The possibility of fraud had never occurred to me, even though, in the years before I heard from Data Colada, colleagues not connected to the original paper and I had tried and failed to replicate the signing-first effect. In 2020, after inviting the original authors to join us, we published a paper about this failure to replicate the results, and I tried to convince my coauthors on the original paper that we should retract the 2012 paper.2 The majority were against retraction.
My 2012 coauthor Dan Ariely of Duke University had claimed to have gathered the data for the field experiment from Hartford Insurance, but this now seemed in doubt. The Data Colada authors confided that they also had suspicions about one of the paper’s other experiments, whose data had been provided by my Harvard Business School colleague Francesca Gino. Gino had been my advisee and had become my coauthor, friend, and peer. I was shaken to hear that Data Colada suspected she had committed fraud on this and at least three other papers.
After conferring with me, the Data Colada team went on to present their suspicions about the field experiment in a blog post and privately took their concerns about Gino’s work to Harvard. In media reports, Ariely responded by suggesting that Hartford Insurance was solely responsible for the fraudulent material. Hartford would later provide convincing evidence that it was not the source of fabricated data. In 2023, after a two-year investigation, Harvard put Gino on administrative leave. (In May 2025, Harvard revoked her tenure; Gino maintains that she has not committed academic misconduct.) Later in 2023, Data Colada posted evidence suggestive of fraud in one of the two laboratory studies in the signing-first paper (and in at least three other papers by Gino).
Social science research findings that suggest ways to influence the behavior of employees, customers, or negotiation partners are often disseminated via business and management publications. The academic stamp of approval encourages organizational leaders to give credence to the findings and adjust their own practices accordingly. The apparent finding that people were more honest when they signed their names before filling out a form rather than after led some organizations to change their forms; later, they learned that they may have wasted time and money. More broadly, this and other scandals, along with evidence that questionable research practices are widespread in social science, diminish the credibility of management research.
In a world where social science research increasingly makes headlines for the wrong reasons, how can leaders know which research to trust? Here, I provide an overview of the growth of interest in social science findings and the current crisis, and then offer guidance on how to effectively consume social science findings.
The Rising Tide of Behavioral Research
The first decade of the 21st century was a great time to have expertise in psychological science. University psychology departments and research labs grew significantly from 2000 to 2010, and business schools began hiring more Ph.D.s in psychology and organizational behavior. Psychology also influenced other disciplines, including the emergent field of behavioral economics and the related areas of behavioral finance and behavioral marketing. In 2002, psychologist Daniel Kahneman won the Nobel Memorial Prize in Economic Sciences for his research on human judgment and decision-making under uncertainty.
Psychological researchers whose ideas appealed to managers began to be rewarded with consulting contracts, and as they attracted participants to lucrative executive education programs, their institutions benefited as well. Psychology books aimed at the general public proved to be a successful niche: Kahneman’s Thinking, Fast and Slow, Carol Dweck’s Mindset, and Adam Grant’s Hidden Potential each sold more than a million copies.
This popularization of psychology shifted researchers’ incentives, as the impact of our work was increasingly measured not just by its influence within our field but also by its resonance with the public. But doubts about its credibility would soon reemerge.
The Replicability Crisis
Fraud is devastating when it happens, but known cases are extremely rare. Social science struggles with a broader problem: Many studies fail to replicate when the same experiment is repeated. When research fails to replicate, it may be because the original finding was weak and doesn’t hold up under slightly different conditions. It may be because the original researchers ran a study multiple times in a variety of ways until they got results that supported their hypothesis, leaving out the results that did not. In addition, researchers are more inclined to look for errors in their methods and data when they do not like the results.3 When the results are consistent with what the researchers predict, they may well miss errors that helped them confirm their predictions.
Data Colada has referred to such practices as p-hacking: the inappropriate manipulation of data or analyses to produce statistically significant results. The name comes from the p-value used throughout the academic literature, which measures the probability that a finding could have occurred by chance. P-hacking leads to the publication of conclusions that are less likely to hold up as accurate over time.
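To see why this matters statistically, here is a minimal simulation sketch of my own (not from the paper or from Data Colada). It assumes a hypothetical researcher who tries five analyses on data that contain no true effect and reports whichever one comes out significant; the group sizes and number of analyses are illustrative assumptions only.

```python
# Illustrative simulation: how p-hacking inflates false positives.
# Both groups are drawn from the SAME distribution, so any "significant"
# difference is a false positive by construction.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
N_STUDIES = 10_000   # simulated studies
N_PER_GROUP = 30     # participants per condition
N_ANALYSES = 5       # analysis variants the researcher could try
                     # (modeled as independent tests for simplicity)
ALPHA = 0.05

honest_hits = 0      # significant results when only the first test counts
hacked_hits = 0      # significant results when the best of five is reported

for _ in range(N_STUDIES):
    p_values = []
    for _ in range(N_ANALYSES):
        control = rng.normal(0, 1, N_PER_GROUP)
        treatment = rng.normal(0, 1, N_PER_GROUP)   # no true effect
        p_values.append(ttest_ind(control, treatment).pvalue)
    honest_hits += p_values[0] < ALPHA      # preregistered single test
    hacked_hits += min(p_values) < ALPHA    # keep whichever test "worked"

print(f"False-positive rate, single preregistered test: {honest_hits / N_STUDIES:.1%}")
print(f"False-positive rate, best of {N_ANALYSES} tests: {hacked_hits / N_STUDIES:.1%}")
```

Run with NumPy and SciPy installed, the honest rate lands near the nominal 5%, while the take-the-best-of-five rate climbs above 20%. That gap is the statistical core of why p-hacked findings tend not to replicate.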
In 2012, psychologist Brian Nosek led a large-scale project to replicate 100 studies published in three of the most prestigious psychology journals to see whether the initial findings could be reproduced.4 Science reported the results of the effort in 2015: While 97 of the 100 original studies claimed significant findings, only 36% of the replications were significant.5 Many senior psychology scholars were not happy with Nosek’s publication and worked hard to explain away his results.6 While some of the critiques were valid, overall, social scientists were shocked and disturbed by the magnitude of the replication failure.
The various forms of p-hacking are qualitatively different from making up results. But when results fail to replicate because of p-hacking, the practical consequences are much the same for managers looking to put research findings into practice.
How Managers Can Identify Good Research
Given the problems of outright data fabrication and p-hacking bias, how should business leaders approach research that they find intriguing? I suggest adopting the following three practices to vet research.
1. Read the research critically. Imagine that you read about a new behavioral science finding that’s relevant to challenges you’re facing in your organization. Before considering how you might incorporate the finding into your practice, read the academic paper reporting on the original research. Approach it critically and skeptically, thinking about alternative explanations for the results being reported. Check to see whether the researchers conducted their work under the guidelines provided by the open science movement (more about that in a moment). Failure to disclose the methodology, preregister a research plan, or make data available to all is a red flag.
2. Find out whether the results have been replicated. Investigate whether the experiments have been repeated by independent research teams that did not include the original authors. It is relatively easy to identify all of the papers that have referenced the original paper via Google Scholar. The more an idea has been replicated, the more credible it is. Intriguing ideas that have not been replicated by others demand more careful assessment.
3. Test ideas before implementing them broadly. I have worked with many executives who, if they like an idea of mine and trust me, are ready to incorporate it throughout their organization, worldwide. I urge them to slow down and test the idea with a small group first. There are many reasons why an idea that worked in a laboratory or in one industry might not work in another setting. I find it more difficult to get executives to test an idea than to convince them to implement it broadly, and I have long seen that eagerness to skip testing as a mistake. Amid the current replicability crisis, it is a bigger mistake than ever.
It’s important to note that managers who try to determine whether a study’s results have been replicated by other labs will often find that they have not — for the simple reason that universities and journals reward researchers for coming up with new, original findings, not for repeating others’ studies. This underlying dynamic contributes to the replication crisis and weakens science in the long run.
Within the field, researchers need to adopt open science practices of disclosure, preregistration, and making data fully available.7 Disclosure means that any reader or reviewer of the original research paper should be able to fully understand what a research team did at each stage of the research process, including when they specified their hypothesis. This relates to preregistration, the practice of submitting predictions and a data analysis plan before collecting or analyzing data, so that researchers aren’t tempted to change their hypothesis to fit unexpected results. Researchers should subsequently make all of their data publicly available — ideally, by posting it online. Think of these practices as good hygiene that should be expected of researchers in the 2020s.
Universities that host and support social science research should join the open science movement to safeguard their prestige and contribute to the integrity of the science they foster. And when allegations of research misconduct arise, universities should respond with transparency — not, as is often the case with U.S. universities, with secrecy that prioritizes reputations over the integrity of science.8
It’s also incumbent upon researchers like me to push past professional niceties and ask harder questions of coauthors and co-investigators. In the numerous cases of academic fraud that I document in my book, Inside an Academic Scandal, coauthors often had hints that something was wrong. I had this experience myself: On the signing-first paper, I was the coauthor who asked the most questions about the orange flags that emerged. In retrospect, I didn’t ask enough questions, and I too easily accepted answers that I wanted to be true.
Behavioral science research can offer valuable insights that improve organizations. We need to both empower managers to read it more critically and strengthen norms and standards in the academic world to validate its credibility.
References
1. L.L. Shu, N. Mazar, F. Gino, et al., “Signing at the Beginning Makes Ethics Salient and Decreases Dishonest Self-Reports in Comparison to Signing at the End,” Proceedings of the National Academy of Sciences 109, no. 38 (Sept. 18, 2012): 15197-15200.
2. A.S. Kristal, A.V. Whillans, M.H. Bazerman, et al., “Signing at the Beginning Versus at the End Does Not Decrease Dishonesty,” Proceedings of the National Academy of Sciences 117, no. 13 (March 16, 2020): 7103-7107.
3. R. MacCoun and S. Perlmutter, “Blind Analysis: Hide Results to Seek the Truth,” Nature 526, no. 7572 (Oct. 8, 2015): 187-189.
4. These journals were Psychological Science, the Journal of Personality and Social Psychology, and the Journal of Experimental Psychology: Learning, Memory, and Cognition.
5. A.A. Aarts, J.E. Anderson, C.J. Anderson, et al., “Estimating the Reproducibility of Psychological Science,” Science 349, no. 6251 (Aug. 28, 2015).
6. D.T. Gilbert, G. King, S. Pettigrew, et al., “Comment on ‘Estimating the Reproducibility of Psychological Science,’” Science 351, no. 6277 (March 4, 2016): 1037.
7. M.H. Bazerman, “Inside an Academic Scandal: A Story of Fraud and Betrayal” (Cambridge, Massachusetts: MIT Press, 2025).
8. Ibid.