Can We Trust Psychological Research?

During the last few years there has been increased discussion about the reliability and trustworthiness of psychological research.

Skepticism has been fueled not only by several high-profile cases of outright scientific fraud (see the cases of Diederik Stapel and Dirk Smeesters), but also by numerous failures to replicate previously “established” scientific findings.

Consider, for example, the recent findings of the “Reproducibility Project,” the largest collaborative effort to replicate published research in psychology thus far. Researchers selected a sample of papers published in prominent psychological journals and found that, out of the 100 findings selected for inclusion in the project, only 39 could be replicated according to conventional standards and pre-established criteria.

Clearly, psychological scientists should be concerned.

When a sizeable percentage of published research cannot be reproduced by independent and objective investigators, then what right does a field have to consider itself a science? Unreliable findings diminish the integrity of our field and legitimize the popular notion of psychology as nothing more than a “soft science.”

But the impact of unreliable psychological research on public well-being far outweighs the impact to our collective professional ego. Repeated failures to replicate published research undermine trust in our work, as well as trust in the social, educational and health-related recommendations we offer to the public based on that work.

Healthcare providers and patients struggling with mental illness trust psychological scientists to make treatment recommendations based on solid, empirical evidence. Patients seek out treatments (and avoid others) based on our recommendations. Similarly, teachers and school administrators trust psychological scientists to recommend teaching and learning strategies that are validated by objective and reproducible research. Students and teachers alike invest time and energy in strategies that our research ostensibly suggests will lead to improved academic performance.

And although psychology is not the only scientific discipline in the midst of a “replication crisis” (some evidence suggests that reproducibility of research in cancer biology and drug discovery might also be quite poor), the obvious question is why so much psychological research fails the test of reproducibility.

Although I won’t try to provide a definitive answer here, it’s worth noting (as others have elsewhere) that in the past decade there has been a dramatic increase in the number of papers retracted from psychological journals – papers that have now effectively been erased from the scientific record.

Figure 1 below shows the percentage of articles that were retracted from psychological journals each year from 1989 to 2015, based on data I gathered from the academic database PsycINFO.

Figure 1: Number of Journal Articles in Psychology and Percentage of Articles Retracted (1989 – 2015).


In 1989, there were only 2 retractions out of a total of 37,742 published journal articles (0.0053%).

Meanwhile in 2013, there were 69 retractions out of a total of 137,514 published journal articles (0.0502%).

So although retractions are still fairly uncommon (~0.01% of published articles on average over the past 26 years), they have become markedly more frequent in recent years, with the retraction rate rising by a factor of 4.5 from 1989 to 2015.
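For readers who want to check the arithmetic, here is a minimal Python sketch that recomputes the two rates quoted above from the raw counts. The counts come from the article; the variable names and structure are my own. Note that the 4.5x figure above compares 1989 with 2015, whereas only the 1989 and 2013 counts are quoted here, so the ratio printed below comes out larger.

```python
# Recomputing the retraction rates quoted above (a sketch, not part of the
# original analysis). The raw counts come from the article; everything else
# (names, structure) is my own.

counts = {
    1989: (2, 37_742),      # (retracted articles, total published articles)
    2013: (69, 137_514),
}

rates = {year: retracted / total * 100
         for year, (retracted, total) in counts.items()}

for year, rate in sorted(rates.items()):
    print(f"{year}: {rate:.4f}% of published articles retracted")

# Ratio of the two quoted rates (1989 vs. 2013); the article's 4.5x figure
# instead compares 1989 with 2015, after the rate had declined from its peak.
print(f"2013 rate is about {rates[2013] / rates[1989]:.1f}x the 1989 rate")
```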

And as Figure 2 makes clear, most retractions in psychology come from the area of Social Psychology.

Figure 2: The Top 10 Journals with the Most Retractions from 1989 to 2015.


So what has been going on since around 2001, when the rate of retractions in psychology first started to creep up?

Is the increase in the retraction rate from 1989 to 2013 due to an increase in the prevalence of fabricated data and extreme scientific misconduct? Or is it due to more subtle factors, such as an increase in the prevalence of highly questionable and sloppy research practices (e.g., p-hacking and data snooping; see the short simulation below)?

Or is it both?

Or is it neither?
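Whatever the answer turns out to be, it's worth seeing just how easily one of those “subtle factors” can generate spurious findings. Below is a minimal Python sketch (my own illustration, not an analysis from this post) of optional stopping, a common form of p-hacking: the data are pure noise, yet simply checking the p-value after every few participants and stopping at the first significant result inflates the false-positive rate well beyond the nominal 5%.

```python
# A toy simulation of "optional stopping," one common form of p-hacking.
# Both groups are drawn from the SAME distribution, so the true effect is
# zero and any "significant" result is a false positive. Peeking after
# every batch and stopping at the first p < .05 inflates the error rate.

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

def one_study(max_n=100, batch=10, alpha=0.05):
    """Collect data in batches, testing after each batch (optional stopping)."""
    group_a, group_b = [], []
    while len(group_a) < max_n:
        group_a.extend(rng.normal(0, 1, batch))
        group_b.extend(rng.normal(0, 1, batch))
        if stats.ttest_ind(group_a, group_b).pvalue < alpha:
            return True   # stop early and report a "significant" effect
    return False          # no significant result; study goes in the file drawer

n_studies = 2000
false_positive_rate = sum(one_study() for _ in range(n_studies)) / n_studies
print(f"False-positive rate with optional stopping: {false_positive_rate:.1%}")
print("Nominal rate if we had tested only once: 5.0%")
```

In runs of simulations like this one, the false-positive rate typically comes out several times higher than the nominal 5%, which is one reason pre-registered sample sizes and stopping rules matter.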

Have psychological researchers just gotten sloppier and less rigorous in recent decades? Although it’s difficult to say for sure, consider the data presented in Figure 3, showing that, since 1989, there has been a steady increase in the percentage of papers corrected following publication (0.98% in 2015 vs. only 0.28% in 1989).1

Figure 3: Number of Journal Articles in Psychology and Percentage of Articles Corrected Following Publication (1989 – 2015).


Then again, perhaps the increase in the retraction rate in recent years says nothing at all about whether there has been an increase in the prevalence of scientific misconduct in psychology.

Perhaps the rate of scientific misconduct has been fairly constant all these years and the reason for so many retractions today vs. several decades ago is that the psychological community has gotten better at policing itself. Perhaps, as a field, we have gotten better at detecting the misconduct and questionable research practices that have always been there.

If so, then it bodes well for the integrity of our science that the retraction rate has finally started to decline from that all-time high in 2013.

Whatever the case may be, there’s no question that psychological scientists and journal editors take the issues raised here very seriously. And with renewed focus on the importance of replication, transparency, and standardized experimental and statistical procedures, our “soft science” just might finally be starting to harden up a bit.

Have an opinion on the “Replication Crisis” in psychology? Feel free to leave a comment or question below.

 

1 Although the data shown in Figure 3 are interesting, they don’t necessarily suggest that psychological science has gotten less rigorous. For one thing, I’d be curious to see how the rate of post-publication corrections in other scientific fields compares to that of psychology. Moreover, published papers are corrected for a variety of reasons, many of which have nothing to do with experimental methodology and interpretation of results (e.g., relatively minor corrections to a published figure or table). As such, the increase depicted in Figure 3 might speak more to an increase in sloppy manuscript editing than to an increase in sloppy research design and data analysis.

 

Brian Kurilla is a psychological scientist with a Ph.D. in cognitive psychology. You can follow Brian on Twitter @briankurilla 

3 thoughts on “Can We Trust Psychological Research?”

  1. The failure to replicate does not only affect aspects of psychology. An interesting study by Harry Collins (2001, Social Studies of Science 31/1, 71-85) reported how physicists in the UK and US failed to replicate experiments by Russian scientists. When Russian and western scientists could work together they discovered what they had taken as an irrelevant aspect of the experimental set-up had been crucial for the results the Russians obtained. (See Wikipedia entry for Harry Collins for other studies of the actual practice of scientists).

    Experiments may work for reasons the experimenters did not appreciate. In so far as they assumed success was due to hypothesised relationships, of course they were wrong (and this could have important consequences, or perhaps explain why some things that ‘worked’ in the lab did not translate into successful treatments). Failed replication attempts may point to the possibility that the initial studies were right, but not for the reasons given, which should prompt us to look further into what was actually done rather than assume faulty procedures in the initially successful studies.

    1. True, experiments can work out for unintended and unanticipated reasons. And when presumed irrelevant factors are not taken into account during a replication attempt, replication will obviously be unsuccessful. I suppose this issue points to the importance of researchers clearly communicating every detail of their methodology (a high ideal that is probably rarely met), as well as the usefulness of having researchers register experimental protocols prior to publication.
