Did you know looking at a sculpture might make you less religious? Perhaps you've heard that writing down your exam worries will help you score better – and of course we all know Google affects our memories.
These conclusions may seem sensational, but they were all based on social science studies featured in the highly respected, peer-reviewed journal Science. Being published there comes with an implicit guarantee: this claim can be trusted.
There's just one problem: they may not be true.
According to a new analysis published in Nature Human Behaviour, even studies published in top journals should be taken with a grain of salt. When researchers attempted to reproduce 21 different social science studies published between 2010 and 2015 in both Science and Nature, they found that in nearly two-fifths of cases, the original results could not be replicated.
Using methods approved by the original study authors, researchers led by Brian Nosek of the University of Virginia tested the key finding of each paper. To increase statistical power, they also boosted the sample sizes of the experiments by an average factor of five.
Surprisingly, the results could not be replicated in eight of the 21 studies – a rate of 38 percent. Even more dispiritingly, in the 13 studies that did reproduce the original findings, the measured effect was only about half the size initially reported – something Nosek attributes to the larger replication samples yielding more accurate estimates. With smaller sample sizes, he explained in a press statement, "studies that obtain a significant result are likely to be exaggerations of the actual effect size."
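The exaggeration Nosek describes can be illustrated with a toy simulation. This is not the replication project's method – the effect size, sample size, and significance test below are arbitrary assumptions chosen for illustration – but it shows the mechanism: when only statistically significant results survive, the surviving estimates systematically overshoot the true effect.

```python
import random
import statistics

# Toy simulation of the "winner's curse": with a small true effect and small
# samples, only the experiments that happen to overestimate the effect clear
# the significance threshold, so the surviving estimates look inflated.
# All numbers here (effect size, sample size, test) are illustrative
# assumptions, not values from the replication study.

random.seed(42)

TRUE_EFFECT = 0.2   # true mean difference, in standard-deviation units
SMALL_N = 25        # participants per group in a "small" study
N_SIMS = 2000       # number of simulated experiments

def run_experiment(n):
    """Simulate one two-group study; return (observed effect, significant?)."""
    treatment = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    effect = statistics.mean(treatment) - statistics.mean(control)
    se = (2.0 / n) ** 0.5             # standard error of the mean difference
    significant = effect / se > 1.96  # crude one-sided z-test
    return effect, significant

all_effects, sig_effects = [], []
for _ in range(N_SIMS):
    effect, significant = run_experiment(SMALL_N)
    all_effects.append(effect)
    if significant:
        sig_effects.append(effect)

print(f"true effect:                   {TRUE_EFFECT:.2f}")
print(f"mean effect, all studies:      {statistics.mean(all_effects):.2f}")
print(f"mean effect, significant only: {statistics.mean(sig_effects):.2f}")
```

Under these assumptions the significant studies report, on average, an effect several times larger than the true value – the same direction of bias the replication project observed when its bigger samples roughly halved the published effects.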
But is there something else going on? Publication bias – the tendency of journals to prefer provocative, positive findings over unremarkable null results – is a documented problem with worrying ramifications.
"The emphasis on novel, surprising findings is great in theory," explained Richard Klein, who has worked with Nosek in previous replicability studies, "but in practice it creates publication incentives that don’t match the incremental, careful way science usually works."
Although some of the original study authors are objecting to the news that their results could not be replicated, others are more philosophical.
"In hindsight, our study was outright silly," Will Gervais, whose widely circulated 2012 study was one of the eight that fell short, told Vox.
"It was a really tiny sample size, and barely significant... I'd like to think it wouldn't get published today," he added.
Gervais had originally suggested that analytic thinking – primed by, among other things, looking at Rodin's The Thinker – could erode religious belief. The sculpture may well be a work of genius, but it appears unlikely to shake your belief in God.
Other scientists, too, are well aware of the replicability problem. Alongside the main experiments, Nosek and his team set up a "prediction market" in which researchers could bet on which results would be reproduced and which would turn out to be spurious. On average, they predicted that about 61 percent of the results would replicate – a number astonishingly close to the true rate.
"If the original result was surprising, participants report having a sense that it is less likely to be true," remarked Nosek. "Hence the aphorism that extraordinary claims require extraordinary evidence."
This is not the first time Nosek has challenged the scientific establishment. In 2015, he directed a study aiming to reproduce 100 peer-reviewed psychology papers, with the concerning result that almost two-thirds of the studies had produced findings that could not be replicated. But despite these disappointing results, he's surprisingly optimistic about the future of scientific research.
"Researchers are taking reproducibility seriously and looking for ways to improve the credibility and transparency of their claims," he says. "It’s a very exciting time."
Spokespersons for Nature and Science have said they encourage authors to explain their results as fully as possible, to allow better evaluation and replication. Nature has required authors to complete a checklist of information about their methods and conclusions since 2013, BuzzFeed reports.
"The social-behavioral sciences are undergoing a reformation," wrote Nosek in a series of tweets. "Change is occurring by grassroots efforts by individuals and labs, and by policy interventions by disciplinary leaders and editors... Change is also spreading... All fields will benefit from this reformation toward greater rigor and transparency."