The frequency with which scientific papers are cited is often used as a measure of quality, for example in determining whether the authors get grants or promotions. It would be a major problem, then, if bad research were cited more often than good work. Unfortunately, that is what a study of 139 scientific papers has found.
Psychologists have been wrestling recently with the discovery that many papers published in the top journals in their field fail to replicate, meaning that different results are produced when others repeat the same experiment. The reasons for this are disputed, and it doesn't always mean the original paper was entirely wrong. For example, a finding may be accurate for a sample of participants in one country, but appear unreproducible when the study is repeated in a different culture.
Nevertheless, when only 39 out of 100 papers in leading psychology journals could be replicated, something is definitely off. Smaller attempts to extend the research to other fields produced better, but still discouraging, findings. Eleven out of 18 papers in economics journals and 13 of 21 in the general science journals Science and Nature were replicated successfully. Effect sizes were also usually smaller on replication.
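The three samples above can be tallied directly; a minimal sketch of the arithmetic, using only the counts reported in the article:

```python
# Replication counts reported in the article: (replicated, attempted)
samples = {
    "psychology journals": (39, 100),
    "economics journals": (11, 18),
    "Science and Nature": (13, 21),
}

# Per-field replication rates
for field, (replicated, attempted) in samples.items():
    print(f"{field}: {replicated}/{attempted} = {replicated / attempted:.0%}")

# Pooled across all 139 papers in the study
total_rep = sum(r for r, _ in samples.values())
total_att = sum(a for _, a in samples.values())
print(f"overall: {total_rep}/{total_att} = {total_rep / total_att:.0%}")
```

Pooling the three samples, fewer than half of the 139 papers (63 of them, about 45 percent) replicated successfully.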
Some scientists have argued this is not serious – bad research is usually forgotten because those working in closely related fields can spot the flaws. Dr Marta Serra-Garcia and Dr Uri Gneezy of the University of California, San Diego, conclude the exact opposite is true.
Serra-Garcia and Gneezy found the papers that could not be replicated in these three samples averaged 153 more citations over eight years than the ones that were subsequently confirmed. Although the sample of general science papers is small, the results there were more extreme: papers that failed replication picked up 300 more citations than the good ones.
Part of the reason is that to rebut a paper you need to cite it, unless it is so bad you can get it retracted by the publishing journal or the authors. Few papers are refuted hundreds of times, however, so most of these citations were positive. Indeed, the authors note in Science Advances: “Remarkably, only 12 percent of post-replication citations of non-replicable findings acknowledge the replication failure.”
Of course, few people deliberately cite papers approvingly when they know they are wrong (climate change deniers being an exception). Instead, the authors think the papers that make big, surprising claims are seen as the most interesting. If many studies reach the same conclusion and one says the opposite, it will stand out from the pack, but it’s also probably wrong. The same goes for a result that fits with what most people really want to believe.
"Interesting or appealing findings are also covered more by media or shared on platforms like Twitter, generating a lot of attention, but that does not make them true," Gneezy said in a statement.
Another explanation, complementary to the first, is that prestigious journals have a good sense of which papers in their field are doubtful and reject most of them, while publishing the ones they anticipate will boost their all-important citation index. "We also know that experts can predict well which papers will be replicated," the authors note. The prospect of a highly cited paper may tempt editors to lower their standards.
Such spreading of unreliable or inaccurate work can lead other scientists astray and mean the wrong people’s careers progress. Things really go wrong when false claims on contentious topics reach the wider public. The authors note the most famous example: Andrew Wakefield’s false and fraudulent paper claiming vaccination causes autism was very widely cited, inspiring the anti-vaccination movement with calamitous results.