In theory, science should be entirely free of bias. That's never true in practice, but we can still strive to minimize distortions. To do that effectively, though, it helps to know how much bias exists in the first place. A paper in Proceedings of the National Academy of Sciences attempts to measure certain types of scientific bias and to reveal which fields suffer most.
Scientific bias can operate in many ways. It shows up in the people who get hired and in the questions that get asked. First author Dr Daniele Fanelli of Stanford University, however, focused on a statistical analysis of whether certain sorts of papers were more likely to announce suspiciously large effects.
By looking at meta-analyses across 22 disciplines, each comparing the results of different primary studies on the same question, Fanelli concluded there is only a small bias, on average, towards overstatement of effects. In certain fields, however, and for papers published under certain conditions, the problems are greater, sometimes to a disturbing extent.
A matter of major concern is the possibility that the proportion of unreliable publications is rising, something that has been attributed both to increasing pressure on researchers to publish more and to the rise of pseudo-journals pretending to be peer-reviewed. Fanelli found evidence of an increase in the proportion of exaggerated papers over time, particularly in the social sciences, but overall concluded: “Most of these bias patterns may induce highly significant distortions within specific fields and meta-analyses, but do not invalidate the scientific enterprise as a whole.”
Small early studies often report quite dramatic findings. Later research frequently finds that the effects described in the original study are real, but smaller than initially reported. For example, medical trials funded by industry might announce a drug worked amazingly well, only for independent research to show a smaller, or non-existent, benefit.
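This shrinking of effects in later, larger studies has a simple statistical explanation, often called the small-study effect or winner's curse: if only statistically significant results tend to get reported, then small studies, which clear the significance bar only when their estimate happens to be large, will systematically overstate the true effect. The sketch below is a minimal illustrative simulation of that mechanism; the sample sizes, effect size, and study counts are invented for the example, not taken from the PNAS paper.

```python
import math
import random

random.seed(42)

TRUE_EFFECT = 0.2  # assumed true standardized effect size (illustrative)

def significant_estimates(n, studies):
    """Simulate `studies` studies, each averaging n draws from
    N(TRUE_EFFECT, 1), and keep only the effect estimates that clear
    a two-sided 5% significance threshold (known-variance z-test)."""
    threshold = 1.96 / math.sqrt(n)
    kept = []
    for _ in range(studies):
        est = sum(random.gauss(TRUE_EFFECT, 1) for _ in range(n)) / n
        if abs(est) > threshold:
            kept.append(est)
    return kept

small = significant_estimates(n=20, studies=20000)   # small early studies
large = significant_estimates(n=500, studies=2000)   # later, larger studies

print(f"true effect:                      {TRUE_EFFECT}")
print(f"mean significant estimate, n=20:  {sum(small) / len(small):.3f}")
print(f"mean significant estimate, n=500: {sum(large) / len(large):.3f}")
```

With these numbers, the significant small studies report an average effect well over twice the true value, while the large studies land close to it, without anyone fabricating anything: the filter of significance plus small samples is enough.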