In 2012, Shu, Mazar, Gino, Ariely, and Bazerman published a three-study paper titled “Signing at the beginning makes ethics salient and decreases dishonest self-reports in comparison to signing at the end.”1 The paper demonstrated that asking people to sign a statement of honest intent before providing information—for example, when submitting an insurance claim—can significantly decrease dishonesty, compared to when people sign such a statement after providing information.
Since its publication, the paper has received hundreds of citations and become influential in the study of dishonest behavior. However, a failed 2020 replication and a viral article posted on the behavioral science blog Data Colada in August 2021 led many to question both the results of the studies and the integrity of the researchers.2
Unsurprisingly, these revelations sent shockwaves through behavioral science: the article accused Dan Ariely, one of the leading academics in the field, of data fabrication (an allegation he has strongly denied).
Why did the authors of the Data Colada post accuse only Ariely of data fabrication, even though the paper had four other co-authors? Because Ariely was the only author responsible for the study in question (Study 3). And they concluded that its data had been fabricated because of a number of anomalies in Study 3’s original data set that are difficult to explain as the result of anything but deliberate manipulation.
One notable discovery was that Study 3’s data, which purported to record the number of miles driven by auto insurance customers, had a distribution so uniform as to be statistically implausible. The researchers behind the blog post also examined the spreadsheet containing Study 3’s data and concluded that many data points had been duplicated and lightly edited.
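To give a flavor of how such a check might work, here is a minimal sketch in Python (my own illustration, not the Data Colada team’s actual analysis). It uses a Kolmogorov-Smirnov test to ask whether reported mileage is consistent with a uniform distribution; the simulated data, the 0 to 50,000 range, and all variable names are assumptions made for the example.

```python
# Toy illustration: testing whether "miles driven" data is implausibly uniform.
# Hypothetical data and thresholds; this is not the Data Colada team's code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Real-world mileage tends to be right-skewed: many moderate drivers, a long tail.
plausible = rng.gamma(shape=2.0, scale=6_000.0, size=10_000)

# The blog post argued Study 3's mileages looked uniform between 0 and 50,000,
# which is what naive fabrication (e.g., a random-number formula) can produce.
suspicious = rng.uniform(0.0, 50_000.0, size=10_000)

def ks_against_uniform(miles, low=0.0, high=50_000.0):
    """Kolmogorov-Smirnov test of the sample against Uniform(low, high)."""
    return stats.kstest(miles, stats.uniform(loc=low, scale=high - low).cdf)

print(ks_against_uniform(plausible))   # tiny p-value: clearly not uniform
print(ks_against_uniform(suspicious))  # large p-value: consistent with uniform
```

A large p-value here doesn’t prove fabrication on its own; it simply flags that the data look like the output of a random-number generator rather than real driving behavior.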
We still don’t know for sure what happened in this case, and all of the paper’s authors have stated that they were unaware of these anomalies and do not know how they came to be. The purpose of this article is not to summarize everything that has come to light through the Data Colada post, but rather to examine why this type of fraud (if indeed it is fraud) happens in academia, and to explore some structural changes that could improve research practices everywhere.
Why does fraud happen in academia?
Academia is tough. As a graduate student in the behavioral sciences, I’ve witnessed firsthand the pressure people feel to succeed in this field. One of my professors told me that he sleeps only four hours each night—and his situation isn’t unique. In an interview with Slate, an anonymous female business professor said she “killed [herself]” while working her way up from a mid-level undergraduate university to a top-level faculty job.3 In the process, she alienated her students, annoyed her academic advisor, and sacrificed her health.
These experiences are not surprising given the importance of publishing papers in academic journals. Universities frequently use an individual’s publication and citation counts as a measure of success, making them among the most important criteria for recruitment and advancement.4 This “publish or perish” culture has led researchers to scramble to publish whatever they can, instead of spending time developing research that pushes the frontiers of science forward.4
The increasing competition has even driven some researchers to resort to unethical practices such as salami slicing (splitting the same research into many publishable “slices”), p-hacking (misusing statistical analysis techniques to try to find a publishable result in a data set), and even outright fraud. (It should be noted that not all questionable research practices are deliberate; see the look-elsewhere effect.)
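To make the p-hacking problem concrete, here is a toy simulation (my own sketch, unrelated to any of the studies discussed): when a researcher measures twenty independent outcomes under a true null effect and reports whichever one “works,” the false-positive rate balloons far beyond the nominal 5%.

```python
# Toy simulation of p-hacking: with no real effect, testing 20 outcome
# measures and reporting any p < .05 result inflates false positives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments, n_outcomes, n_per_group = 1_000, 20, 30

false_positives = 0
for _ in range(n_experiments):
    # Treatment and control come from the same distribution: the null is true.
    control = rng.normal(0.0, 1.0, size=(n_outcomes, n_per_group))
    treatment = rng.normal(0.0, 1.0, size=(n_outcomes, n_per_group))
    pvals = stats.ttest_ind(treatment, control, axis=1).pvalue
    if (pvals < 0.05).any():  # "find" and report the one significant outcome
        false_positives += 1

print(f"False-positive rate: {false_positives / n_experiments:.0%}")
# Roughly 1 - 0.95**20, or about 64%, instead of the nominal 5%.
```

Pre-registration, discussed below, counters exactly this: if the outcome measure is declared in advance, a researcher can’t quietly report only the one that happened to cross the significance threshold.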
The move toward Open Science
As awareness of these issues in behavioral science grows, the Open Science movement is emerging as a partial solution. Although “there is no single doctrine or paper that definitively captures Open Science,”5 it can broadly be defined as “a set of practices that increase the transparency and accessibility of scientific research.”6
One of the key components of Open Science is pre-registration, which occurs before the researchers conduct their study. A research plan, which may include the research questions, hypotheses, experimental designs, and data collection and analysis plans, is publicly registered on a website like AsPredicted.
The goal of pre-registration is for researchers to be transparent about the purposes and methods of their study. After pre-registering, they are not barred from changing their research plan, but they are expected to justify and document any changes they make. This prevents researchers from modifying their hypotheses post hoc to align with whatever statistically significant results they happen to find in their data.
Many researchers also post their data and code on websites like the Open Science Framework and Nature Scientific Data so that others can reproduce their analyses, catch any errors, and conduct alternative or additional analyses. Publicly sharing code and data is also what makes replication studies possible. The team at Data Colada and a group of anonymous researchers were able to discover the anomalies in Study 3’s data precisely because the authors of the 2012 paper had posted it publicly. (With that being said, I personally believe that all of the authors of the 2012 paper subscribe to good research practices; something simply went horribly wrong with Study 3’s data.)
As a behavioral science student, I am very pleased to know that many respected researchers are supporting the Open Science movement. Francesca Gino, one of the authors of the 2012 paper, said in response to the Data Colada article, “Though very painful, this experience has reinforced my strong commitment to the Open Science movement. As it clearly shows, posting data publicly, pre-registering studies, and conducting replications of prior research is key to scientific progress.”2
Good things take time
Dr. Peter Higgs won the 2013 Nobel Prize in Physics for his work on the mass of subatomic particles. On his way to Stockholm to receive the award, he told The Guardian, “Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.”7 Though he was speaking about physics, the same goes for behavioral science and the social sciences at large.
Academia seems to have forgotten the phrase “quality over quantity.” Publishing three high-quality research papers should be seen as just as productive as publishing ten lower-quality ones, if not more so. Instead, the hypercompetitive culture of the field has caused many researchers to resort to unethical practices, like fabricating data, in order to preserve their careers.
As we’ve seen from the testimony of people like the anonymous professor quoted above, these high demands can be destructive to the health and teaching ability of academics. But the negative consequences don’t stop there. In the last decade, uptake of behavioral science by organizations in the public and private sectors has rapidly increased; published insights have become the basis for policy changes, business initiatives, self-help programs, and more. If these results turn out to be fraudulent, the consequences for the decision-making of governments, organizations, and private individuals could be enormous.
The Open Science movement is a step in the right direction, but it doesn’t get to the root of the problem. There will always be unethical research practices if the “publish or perish” culture persists. In my opinion, hiring and promotion decisions shouldn’t be heavily based on the number of papers published and citations garnered. Universities should consider a different approach, perhaps giving more weight to teaching abilities and student feedback. What kind of changes do you think should be made?
References
- Shu, L. L., Mazar, N., Gino, F., Ariely, D., & Bazerman, M. H. (2012). Signing at the beginning makes ethics salient and decreases dishonest self-reports in comparison to signing at the end. Proceedings of the National Academy of Sciences, 109(38), 15197-15200.
- Simonsohn, U., Simmons, J., & Nelson, L. (2021, August 17). [98] Evidence of Fraud in an Influential Field Experiment About Dishonesty. Data Colada. https://datacolada.org/98
- Warner, J., & Clauset, A. (2015, February 23). The Academy’s Dirty Secret. Slate Magazine. Retrieved November 4, 2021, from https://slate.com/human-interest/2015/02/university-hiring-if-you-didn-t-get-your-ph-d-at-an-elite-university-good-luck-finding-an-academic-job.html
- Rawat, S., & Meena, S. (2014). Publish or perish: Where are we heading? Journal of Research in Medical Sciences, 19(2), 87.
- Hong, M., & Moran, A. (2019, February). An introduction to open science. American Psychological Association. Retrieved October 8, 2021, from https://www.apa.org/science/about/psa/2019/02/open-science.
- van der Zee, T., & Reich, J. (2018). Open Education Science. AERA Open.
- Aitkenhead, D. (2013, December 6). Peter Higgs: I wouldn’t be productive enough for today’s academic system. The Guardian. Retrieved October 8, 2021, from https://www.theguardian.com/science/2013/dec/06/peter-higgs-boson-academic-system.