
Studies likely to be wrong have 153 more citations

Science journals may be lowering their standards to publish studies with eye-grabbing — but probably incorrect — results.
Key Takeaways
  • Science is facing a replication crisis: many studies published in top journals fail to replicate.
  • A new study examined the citation counts of “failed” studies, finding that these nonreplicable studies accumulate 153 more citations, on average, than reliable research, even after they are shown not to replicate.
  • The study suggests the replication crisis might be driven, in part, by incentives that encourage researchers to generate “interesting” results.

What’s one way to get a quick boost of confidence? If you watched the widely shared 2012 TED Talk “Your body language may shape who you are,” you might think the answer is to strike a power pose.

The idea, detailed in a 2010 paper published in Psychological Science, is that striking a triumphant posture for a couple of minutes causes neuroendocrine and behavioral changes, helping people feel more powerful and perform better at various tasks.

U.K. political candidates. Credit: Kieron Bryan (@kieronjbryan) / Twitter

Beyond looking ridiculous, the “power pose” probably doesn’t confer real benefits. Since 2015, more than a dozen studies have tried and failed to replicate the effects reported in that 2010 paper. And it’s far from the only failed replication.

The replication crisis

Over the past two decades, the repeated failure to reproduce findings in the research literature, especially in the social and biomedical sciences, has been dubbed the replication crisis. Why is it a “crisis”?

Replication is a key principle of the scientific method. Successful replication increases the probability, and therefore confidence, that a given claim or effect is true: After all, if one study finds X, other studies should also find X, assuming they follow or build upon the original study design.

Despite widespread controversy and concern over the replication crisis, there’s little evidence that things are getting better. The problem isn’t just that many studies are nonreplicable but also that findings from nonreplicable studies continue to be cited by subsequent work. “Failed papers,” as a 2020 analysis dubbed them, “circulate through the literature as quickly as replicating papers.”

Bad science travels fast

A new study published in Science Advances suggests the problem may be even worse than previously thought: nonreplicable papers receive 16 more citations per year than replicable ones, on average. Accumulated over the years since publication, that gap amounts to roughly 153 extra citations.
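To see how those two figures fit together, here is a quick back-of-the-envelope check. The implied time horizon is our own arithmetic, not a number reported in the paper:

```python
# Back-of-the-envelope only: relate the per-year citation gap to the
# cumulative figure. The 16/year and 153 totals come from the study;
# the implied ~9.6-year accumulation window is our own inference.
yearly_gap = 16                 # extra citations/year for nonreplicable papers
total_gap = 153                 # cumulative extra citations reported
years = total_gap / yearly_gap  # implied accumulation window
print(f"{total_gap} / {yearly_gap} ≈ {years:.1f} years of accumulation")
```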

This imbalance generally held true even after replication attempts revealed “failed” papers to be nonreplicable. It also persisted after controlling for factors like number of authors, percentage of male authors, language, and location.

Why do journals publish nonreplicable studies? It may come down to hype. “When the results are more ‘interesting,’” the new study suggests, reviewers “apply lower standards regarding their reproducibility.”

Stuart Ritchie made a similar argument in his 2020 book Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth. He suggested that because researchers face institutional pressures to publish papers and win grants, they’re less likely to conduct dry yet valuable “workhorse studies” and more likely to pursue “showy and ostentatious findings” that generate media attention.

In short, incentives may be pushing some researchers away from the pursuit of truth.

Comparing citations

The new research included data from studies featured in three major replication projects conducted between 2015 and 2018. According to the paper, each of the three projects:

“tried to systematically replicate the findings in top psychology, economics, and general science journals. In psychology, only 39% of the experiments yielded significant findings in the replication study, compared to 97% of the original experiments. In economics, 61% of 18 studies replicated, and among Nature/Science publications, 62% of 21 studies did.”

The researchers then compared this replicability data with the number of citations those studies received, collected from Google Scholar from each study’s publication date through the end of 2019. The results showed that when replication projects published data revealing studies to be nonreplicable, there was no significant effect on how often those studies were subsequently cited. In other words, the studies continued to be cited even after they were shown not to replicate.
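For intuition, here is a minimal sketch of the kind of group comparison involved, using made-up citation counts in place of the study’s Google Scholar data. Note that the paper’s actual analysis used regression models with controls, not a simple two-sample test:

```python
# Minimal sketch, NOT the study's analysis: compare mean yearly citations
# for replicated vs. nonreplicated papers. All numbers below are invented
# for illustration; the real study used Google Scholar counts through 2019
# and regression models with covariate controls.
from scipy import stats

replicated = [4, 7, 5, 9, 6, 8, 5, 7]         # hypothetical citations/year
nonreplicated = [18, 25, 20, 31, 22, 27, 24]  # hypothetical citations/year

# Welch's t-test: does the citation gap between the groups look significant?
t, p = stats.ttest_ind(nonreplicated, replicated, equal_var=False)
gap = (sum(nonreplicated) / len(nonreplicated)) - (sum(replicated) / len(replicated))
print(f"mean gap ≈ {gap:.1f} citations/year (Welch t = {t:.2f}, p = {p:.4f})")
```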


Figure: Average yearly citation counts for studies that failed to replicate (according to the P value of the replication) versus those that replicated, in each replication project: (A) Nature/Science, (B) economics, and (C) psychology papers in replication markets. Credit: Serra-Garcia et al.

But couldn’t some citations of nonreplicable studies come from papers critical of those findings? The researchers acknowledged this possibility but noted that only 12 percent of citing papers mentioned that the findings they cited had failed to replicate.

Predicting replicability isn’t difficult

Ignorance or a lack of intuition probably doesn’t explain why reviewers at top academic journals accept nonreplicable papers, or why researchers keep citing them. After all, academics and laypeople alike are quite good at predicting which studies will replicate. A 2020 study found, for example, that laypeople were able to guess the replicability of social science studies with above-chance accuracy (59 percent).

Similarly, a 2018 analysis found that psychologists correctly predicted the replicability of psychology studies with an accuracy of 70 percent, while a 2021 paper found that experts could predict the replicability of behavioral and social science papers 73 percent of the time.
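As a rough illustration of what “above chance” means here, a binomial test can check whether a given accuracy beats coin-flipping. The sample size below (100 studies) is an assumption made for the example; the cited papers each used their own, different samples:

```python
# Hedged sketch: is 59% accuracy on binary replicates/doesn't-replicate
# guesses better than chance? The sample size is assumed for illustration;
# it is not taken from the cited studies.
from scipy.stats import binomtest

n_studies = 100                    # assumed sample size (hypothetical)
correct = round(0.59 * n_studies)  # 59% accuracy, per the 2020 laypeople study
result = binomtest(correct, n_studies, p=0.5, alternative="greater")
print(f"{correct}/{n_studies} correct; one-sided p = {result.pvalue:.4f}")
```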

These findings seem to bolster the argument that hype-related incentives are contributing to the replication crisis. Still, in the spirit of replication, it’s probably worth waiting until these findings themselves are replicated by future research.

