Bronte O’Brien studies the study behind the studies of psychological science and prescribes a grain of salt in viewing the results.
It’s always a thrill when a new scientific study sends shockwaves through popular culture.
One such study, Estimating the reproducibility of psychological science, was recently published in the journal Science by researchers at the University of Virginia. This mammoth undertaking replicated 100 psychological studies and found that only 37% of the original results could be reproduced to a statistically significant degree.
If you swallow clickbait headlines and lurid prose, some reports would have us believe that the study’s implications are destabilising and terrifying, and that it proves psychology is nonsense. The study itself is far more nuanced – and interesting – than that.
This study aimed to add to conversations about the integral concept of reproducibility. The more a study is repeated, the better we can ascertain whether its results hold up, and the greater our confidence in the scientific claims. As the study contends, “Innovation points out paths that are possible; replication points out paths that are likely; progress relies on both.”
There are numerous reasons why initial results may not be replicated. These include false positives, selective reporting, selective analysis, follow-up studies that don’t match the original conditions, and plain chance.
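The roles of false positives and chance can be made concrete with a toy simulation (a hypothetical sketch, not the study’s actual method): imagine each “study” flips a coin 100 times and declares a significant bias whenever the head count strays outside the range a fair coin would produce about 95% of the time. The threshold of 40/60 heads is an illustrative choice, as are all names below.

```python
import random

def run_study(true_bias=0.5, n_flips=100):
    """Simulate one toy 'study': flip a coin n_flips times and declare the
    result 'significant' if the head count falls outside the roughly 95%
    range expected for a fair coin (an illustrative criterion)."""
    heads = sum(random.random() < true_bias for _ in range(n_flips))
    return heads <= 40 or heads >= 60

random.seed(42)
n_trials = 10_000

# A fair coin (no real effect): every 'significant' result is a false positive.
false_positives = sum(run_study(true_bias=0.5) for _ in range(n_trials))
print(f"False-positive rate: {false_positives / n_trials:.3f}")  # roughly 0.05

# A real but modest bias, tested with limited power: the same 'finding'
# reaches significance in only a fraction of identical follow-up studies.
replications = sum(run_study(true_bias=0.55) for _ in range(n_trials))
print(f"Significance rate for a modest true effect: {replications / n_trials:.3f}")
```

Even with no effect at all, around 5% of studies come up “significant” by chance alone; and even a genuine, modest effect reaches significance in only a minority of faithful repetitions. Both outcomes look like “failed replication” without either original finding being fraudulent.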
The study has certainly raised concerns about how science is conducted and why reproducibility can be less of a focus. Incentives for scientists, such as being published in a prestigious journal or being picked up by the media, prize novelty and innovation.
As the study states, “Innovative ideas become old news fast.”
This study only provides tentative evidence itself, and is subject to its own potential biases and confounding variables. It also raises the question of when, or whether, this study itself will be repeated.
Importantly, what appears to have been overlooked by a lot of media reports is that failing to reproduce results does not mean that the original results are false.
As the authors put it: “After this intensive effort to reproduce a sample of published psychological findings, how many of the effects have we established are true? Zero. And how many of the effects have we established are false? Zero. Is this a limitation of the project design? No. It is the reality of doing science, even if it is not appreciated in daily practice.”
Science doesn’t deal with certainty, but rather probability. Media reporting often seems to conflate the specific meaning of scientific proof or fact with what “proof” and “fact” typically mean in general parlance.
Some sections of the media report initial studies as though they provide cold, hard facts rather than tentative claims. Maybe that’s why, when a study like this comes out, it is communicated as an earth-shattering tragedy. Maybe “could indicate” or “may show” just doesn’t have a good ring to it.
As the authors of this study contend, “Humans desire certainty, and science infrequently provides it.” (Would be much nicer if science “never” or “always” did, huh.)
In short, the researchers are telling us that they’re less confident about the results they couldn’t replicate, more confident about those they could, and that more investigation is needed to add to the conversation.
Maybe the “bleakness”, the “crisis”, the “psycho-babble” doesn’t inhere so much in this study itself but in a flurry of sloppy media reports. Maybe there’s a parallel here – that incentives in the media value shock and melodrama, that it has its own selective reporting, and that reproducing scientific thought is less of a priority.
“Psychology in turmoil” sounds better, don’t you think?