What Are The Signs Of Biased Science Papers?


Stephen Luntz

Freelance Writer

Stephen has a science degree with a major in physics, an arts degree with majors in English Literature and History and Philosophy of Science and a Graduate Diploma in Science Communication.


Bias in science can reflect who is around the table, but it can also be seen in what gets published, particularly when it shouldn't be. Alexander Raths/Shutterstock

In theory, science should be entirely free of bias. In practice it never is, but we can still strive to minimize distortions. To do that effectively, though, it helps to know how much bias exists right now. A paper in Proceedings of the National Academy of Sciences attempts to measure certain types of scientific bias and to reveal which fields are the worst affected.

Scientific bias can operate in many ways. It comes out in the people who get hired and in the questions that get asked. First author Dr Daniele Fanelli of Stanford University, however, focused on statistical analysis of whether certain sorts of papers were more likely to announce suspiciously large effects.


By looking at meta-analyses across 22 disciplines that compared the results of different primary studies addressing the same questions, Fanelli concluded there is only a small bias, on average, towards overstatement of effects. However, in certain fields, and for papers published under certain conditions, the problems are greater, sometimes to a disturbing extent.

A matter of major concern is the possibility that the proportion of unreliable publications is rising, something that has been attributed both to increasing pressure on researchers to publish more and to the rise of pseudo-journals pretending to be peer-reviewed. Fanelli found evidence of an increase in the proportion of exaggerated papers over time, particularly in the social sciences, but overall concluded: “Most of these bias patterns may induce highly significant distortions within specific fields and meta-analyses, but do not invalidate the scientific enterprise as a whole.”

Small early studies often report quite dramatic findings. Later research frequently finds that the effects described in the original study are real, but smaller than initially reported. For example, medical trials funded by industry might announce a drug worked amazingly well, only for independent research to show a smaller, or non-existent, benefit.

This is not always a product of financial pressure. Some early studies accurately report a large effect on a specific population – such as people in a certain age bracket – but later research shows the population as a whole is less responsive.
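The pattern described above – small studies that clear the bar for publication tending to overstate effects – can be illustrated with a short simulation. This is a hedged sketch, not anything from the paper itself: the true effect size, sample size, and significance threshold below are all hypothetical numbers chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.2   # hypothetical true effect (standardized mean difference)
N_SMALL = 20        # participants per group in each small study
N_STUDIES = 2000    # how many small studies we simulate

published_effects = []
for _ in range(N_STUDIES):
    treatment = rng.normal(TRUE_EFFECT, 1.0, N_SMALL)
    control = rng.normal(0.0, 1.0, N_SMALL)
    diff = treatment.mean() - control.mean()
    se = np.sqrt(treatment.var(ddof=1) / N_SMALL
                 + control.var(ddof=1) / N_SMALL)
    # "Publish" only results that clear a conventional significance bar
    if diff / se > 1.96:
        published_effects.append(diff)

print(f"True effect:             {TRUE_EFFECT}")
print(f"Mean published estimate: {np.mean(published_effects):.2f}")
```

Because only the studies that happened to land on the high side get "published", the average published estimate comes out well above the true effect, even though every individual study was honest – exactly the shrinkage later, larger studies then reveal.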


These early, but exaggerated, papers were likely to be highly cited. After all, if you are not only the first to study something, but make big claims, other people will take notice and respond, even if it is only to show you've stuffed up. With career advancement often dependent on how often a paper is cited, there is clearly a danger that scientists could be rewarded for being wrong.

Early career researchers are more likely to publish exaggerated results, Fanelli found. It's not clear whether this reflects inexperience, the pressure on younger scientists to get themselves into more secure positions, or simply because those scientists who do shoddy work get weeded out.

Small teams are more prone to publish work that subsequently gets discredited than larger collaborations, but Fanelli found no relationship between the volume of publications an author is involved in and the credibility of their work.

Authors who had previously or subsequently had a paper withdrawn were particularly prone to report large effect sizes, although whether this indicates they are more likely to be sloppy, or are actually fraudulent, has not been settled.

