Drug trials using animal research suffer from a substantial risk of bias, meaning that studies often exaggerate the effectiveness of potential new treatments. According to a new study, this wastes time and money when these drugs are taken to the next stage of trials, and it needlessly uses animals on candidate drugs that later experiments show to be ineffective.
The researchers, from the University of Edinburgh, analyzed thousands of studies published over a 10-year period, looking into whether the scientists who conducted the research took basic steps to avoid potential biases during the experiments. These steps included randomizing which animals were given the drugs and which were not; “blinding” the scientists so that they didn’t know which animals had received the drug; producing a conflict of interest statement; and working out the minimum sample size needed for a statistically significant result.
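Two of those steps can be sketched in code. The following is a minimal illustration, not anything from the study itself: the function names are invented for this example, and the fixed z-values assume the conventional two-sided alpha of 0.05 and 80% power.

```python
import math
import random

def sample_size_per_group(effect_size, z_alpha=1.96, z_beta=0.84):
    """Rough minimum animals per group for a two-group comparison,
    using the normal approximation n = 2 * ((z_alpha + z_beta) / d)^2,
    where d is the standardized effect size (Cohen's d).
    The default z-values correspond to two-sided alpha = 0.05 and
    power = 0.80; a real study would use a dedicated power-analysis tool."""
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

def randomize_and_blind(animal_ids, seed=None):
    """Randomly split animals into two equal groups and return:
    - coded: neutral labels ('group-A'/'group-B') shown to the
      experimenter, so they stay blinded to treatment status;
    - key: the hidden treatment/control mapping, kept by a third
      party until the analysis is unblinded."""
    rng = random.Random(seed)
    ids = list(animal_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    key = {a: "treatment" for a in ids[:half]}
    key.update({a: "control" for a in ids[half:]})
    coded = {a: "group-A" if key[a] == "treatment" else "group-B"
             for a in key}
    return coded, key
```

For example, detecting a medium effect (d = 0.5) at these settings requires roughly 63 animals per group, which is one reason under-powered studies with a handful of animals per group are so prone to exaggerated results.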
The results from the 2,671 papers published between 1992 and 2011 that used animal subjects are quite shocking. The researchers found that only 30% of papers reported blinding, 25% reported randomization, only 12% included conflict of interest statements, and less than 1% reported sample-size calculations. This could simply mean that the scientists did undertake these measures but failed to report them, or it could mean they did not take these steps at all and the papers never said so.
“Though sobering, the findings of this paper are not a surprise, as they add to the existing body of evidence on the need for more rigorous assessments of the experimental design and methodology used in animal research,” said Dr Vicky Robinson, the chief executive of the National Centre for the Replacement, Refinement and Reduction of Animals in Research, in a statement.
The study, published in PLOS Biology, then examined whether the more highly regarded journals published more rigorous studies than lesser-known journals. While you might expect the most-cited journals to publish studies with fewer biases, the study actually found the opposite to be true. The researchers suspect this could be because studies with more bias tend to have more impressive results and are therefore more likely to be published by the most prestigious journals.
The researchers also looked at the output from some of the top universities in the U.K. They found that out of more than 1,000 papers from scientists at the University of Cambridge, University of Oxford, University College London, Imperial College London and the University of Edinburgh, more than two-thirds failed to report even one of the four main measures to avoid bias mentioned above. “You can’t rely on where the work was done or where it was published,” said Malcolm Macleod, who carried out the analysis, at a press conference.
One of the main reasons for this systemic failure to report bias-avoidance measures is probably the "publish or perish" culture that pervades many areas of scientific study. Researchers are reluctant to announce negative results, and journals are equally unlikely to publish them. These findings do not, however, mean that we should be using fewer animals in trials, simply that we need to hold these studies to much higher standards, and make sure those standards are enforced.