Questionable research practices and why we need open science

There are several reasons why scientists are in dire need of open research practices, not least of which is the ethical obligation not to mislead themselves or others.

Here I've summarized some of the key reasons and articles that everyone devoted to the improvement of the (social) sciences should read. I have kept it short to make it manageable with a busy schedule.

If you still don't have time to read everything, just read this one:

Nelson, L. D., Simmons, J., & Simonsohn, U. (2018). Psychology’s Renaissance. Annual Review of Psychology, 69(1), 511–534. doi:10.1146/annurev-psych-122216-011836

By and large, humans regularly interpret ambiguous evidence in favor of a hypothesis rather than against it, and rewriting a research question into a hypothesis (which is standard practice in many fields) is highly questionable since it casts exploratory research in confirmatory language. In addition, collecting many variables and then, during analysis, picking and choosing which ones to report is bound to produce support for the hypothesis, even when the null hypothesis is true.
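The last claim is easy to check with a small simulation. The sketch below (illustrative only, not taken from any of the articles; it assumes a simple two-sided z-test with known variance for simplicity) measures how often at least one of several outcome variables comes out "significant" when every null hypothesis is true:

```python
import math
import random

random.seed(1)

CRIT = 1.96            # two-sided critical z value at alpha = .05
N, K, SIMS = 50, 10, 2000  # sample size, measured variables, simulated studies

def significant():
    """One z-test on N draws from a standard normal, i.e. a true null."""
    xs = [random.gauss(0.0, 1.0) for _ in range(N)]
    z = (sum(xs) / N) * math.sqrt(N)
    return abs(z) > CRIT

# Chance that at least one of K independent measures "works" by luck alone:
hits = sum(any(significant() for _ in range(K)) for _ in range(SIMS))
print("false-positive rate with 1 measure : ~0.05")
print(f"false-positive rate with {K} measures: {hits / SIMS:.2f}")
print(f"theoretical rate, 1 - (1 - .05)^{K}: {1 - 0.95**K:.2f}")
```

With ten variables, the chance of at least one spurious "finding" is roughly 40% rather than the nominal 5% — which is why reporting only the variable that "worked" is so misleading.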

Articles on questionable research practices (QRPs)

1. Post hoc reasoning

Kerr, N. L. (1998). HARKing: hypothesizing after the results are known. Personality and Social Psychology Review, 2(3), 196–217. doi:10.1207/s15327957pspr0203_4

Why you should read it: It is very easy to slip into hindsight bias ("I knew it all along") after you have seen the results. But turning your newly gained post hoc insights into an a priori hypothesis that is then "tested" on the same data is circular reasoning. (Although this may be less clear-cut for extremely complex models.)

2. Flexibility in data analysis

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant. Psychological Science, 22(11), 1359–1366. doi:10.1177/0956797611417632

Why you should read it: The authors show how easy it is to get statistically significant results by selectively reporting only the significant outcomes from a number of analyses. The authors also promote a 21-word solution.
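One form of undisclosed flexibility discussed in this literature is optional stopping: testing after every batch of participants and stopping as soon as p < .05. The sketch below (my own illustration, not code from the article; it assumes a two-sided z-test with known variance, and the batch sizes are arbitrary) shows how much this inflates the false-positive rate under a true null:

```python
import math
import random

random.seed(2)

CRIT = 1.96   # two-sided critical z value at alpha = .05
SIMS = 2000   # number of simulated studies

def peeking_trial(start=10, step=10, cap=100):
    """Collect participants in batches, test after each batch, and stop
    as soon as the result looks 'significant' (the null is always true)."""
    xs = [random.gauss(0.0, 1.0) for _ in range(start)]
    while True:
        z = (sum(xs) / len(xs)) * math.sqrt(len(xs))
        if abs(z) > CRIT:
            return True            # declares an effect that isn't there
        if len(xs) >= cap:
            return False
        xs.extend(random.gauss(0.0, 1.0) for _ in range(step))

rate = sum(peeking_trial() for _ in range(SIMS)) / SIMS
print(f"false-positive rate with optional stopping: {rate:.2f}")  # well above .05
```

Even this single undisclosed degree of freedom multiplies the nominal 5% error rate several times over; combining several such choices, as Simmons et al. demonstrate, makes nearly anything "significant".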

3. Difference between exploratory and confirmatory research

Wagenmakers, E.-J., Wetzels, R., Borsboom, D., van der Maas, H. L. J., & Kievit, R. A. (2012). An Agenda for Purely Confirmatory Research. Perspectives on Psychological Science, 7(6), 632–638. doi:10.1177/1745691612463078

Why you should read it: Sets out the differences between exploratory research (fishing for statistically significant results by capitalizing on chance) and confirmatory research (testing whether a proposed theory or hypothesis is accurate). Needless to say, scientists have to be open about which kind of research they are doing.

4. Interpreting evidence in favor of existing beliefs

Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220. doi:10.1037/1089-2680.2.2.175

Why you should read it: You should definitely read about confirmation bias after the three previous articles, because this bias makes a whole lot more sense in light of them. Note, however, that some confirmation bias researchers themselves ignore evidence that speaks against confirmation bias (for examples, see Evans, Newstead, & Byrne, 1993).

5. Bias in science

Crawford, J. T., & Jussim, L. J. (Eds.). (2017). The politics of social psychology. New York, NY: Routledge.

Why you should read it: Since we tend, to some extent, to interpret evidence in favor of existing beliefs, this can have large societal consequences when the majority of researchers share the same political goals. Can we really trust research that promotes political goals rather than truth-seeking? This book gives examples of politics getting in the way of truth-seeking.

6. But this isn't a big problem, right?

John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling. Psychological Science, 23(5), 524–532. doi:10.1177/0956797611430953

Why you should read it: After you have read the previous articles, you might think, "Well, this is clearly important, but it isn't that widespread... is it?" John et al.'s article gives some insight into how frequent questionable research practices actually are.


The fact that scientists should report how the research was actually conducted seems completely self-evident. But not in a competitive academic market. Post hoc (motivated) reasoning seems to be very prevalent. To quote Paul Meehl, "a tremendous amount of taxpayer money goes down the drain in research that pseudotests theories".

Clearly, everything here relates to psychology, so perhaps psychologists are just terrible at research? Even if they were, the same problems have been documented in medicine and political science as well. And it has nothing to do with quantitative versus qualitative methods: human biases don't tend to restrict themselves to particular faculties or methods.

How do we avoid questionable research practices?

Several solutions have been proposed. For example:

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2012). A 21 Word Solution. Social Science Research Network.

Wagenmakers, E.-J., Wetzels, R., Borsboom, D., van der Maas, H. L. J., & Kievit, R. A. (2012). An Agenda for Purely Confirmatory Research. Perspectives on Psychological Science, 7(6), 632–638. doi:10.1177/1745691612463078

Wicherts, J. M., Veldkamp, C. L. S., Augusteijn, H. E. M., Bakker, M., van Aert, R. C. M., & van Assen, M. A. L. M. (2016). Degrees of Freedom in Planning, Running, Analyzing, and Reporting Psychological Studies: A Checklist to Avoid p-Hacking. Frontiers in Psychology, 7, 1832. doi:10.3389/fpsyg.2016.01832