**Margins of measurement**

Imagine you’re driving a car down a 60km/h limit road. You measure your speed to be 55km/h, but your speedometer has some uncertainty in it. You take this into account, and are 99% sure that you are travelling between 51km/h and 59km/h.

Now your friend comes along and analyses your data slightly differently. She measures your speed to be 57km/h. Yes, it is slightly different from your measurement, but still consistent because your speedometer is not that accurate.

But now your friend says: “Ha! You were only marginally below the speed limit. There’s every possibility that you were speeding!”

In other words, the answer didn’t change significantly, but the interpretation given in the paper takes the extreme of the allowed region and says “maybe the extreme is true”.
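The arithmetic behind the analogy can be sketched in a few lines. This is a minimal illustration with the made-up numbers from the story above (a 55km/h reading whose 99% interval spans 51–59km/h), assuming the uncertainty is Gaussian:

```python
from statistics import NormalDist

# Hypothetical numbers from the analogy: a reading of 55 km/h whose
# 99% confidence interval spans 51-59 km/h (i.e. half-width 4 km/h).
reading = 55.0
z99 = NormalDist().inv_cdf(0.995)   # two-sided 99% -> about 2.576
sigma = 4.0 / z99                    # implied measurement uncertainty

lo, hi = reading - z99 * sigma, reading + z99 * sigma
print(f"99% interval: {lo:.1f}-{hi:.1f} km/h")  # 51.0-59.0

# The friend's reading of 57 km/h sits inside the interval,
# so the two measurements are statistically consistent.
friend = 57.0
print(lo <= friend <= hi)  # True

# But the 60 km/h limit lies above even the upper edge: saying
# "maybe you were speeding" means leaning on the extreme of the
# allowed region rather than its most likely value.
limit = 60.0
print(limit <= hi)  # False
```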

For those who like detail, the three standard deviation limit of the supernova data is big enough (just) to include a non-accelerating universe. But that is only if there is essentially no matter in the universe and you ignore all other measurements (see figure, below).

*This is a reproduction of Figure 2 from the new research paper with annotations added. The contours encircle the values of the matter density and dark energy (in the form of a cosmological constant) that best fit the supernova data (in units of the critical density of the universe). The contours show one, two, and three standard deviations. The best fit is marked by a cross. The amount of matter measured by other observations lies approximately around the orange line. The contours lie almost entirely in the accelerating region, and the tiny patch that is not yet accelerating will nevertheless accelerate in the future. Image modified by Samuel Hinton, Author provided*
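The boundary between the accelerating and non-accelerating regions in such a figure follows from a standard textbook relation: for a universe containing matter (density Ωm) and a cosmological constant (ΩΛ), both in units of the critical density, the expansion accelerates today when ΩΛ > Ωm/2. A minimal sketch, with illustrative parameter values (not the paper's fitted numbers):

```python
# Deceleration parameter today for a universe with matter density Om
# and cosmological constant OL (both in units of the critical density,
# radiation neglected): q0 = Om/2 - OL. Negative q0 means the
# expansion is accelerating.
def q0(Om: float, OL: float) -> float:
    return Om / 2 - OL

# Illustrative values near the commonly quoted best fit
print(q0(0.3, 0.7))   # -0.55 -> accelerating

# An essentially empty universe with no dark energy: coasting,
# neither accelerating nor decelerating
print(q0(0.0, 0.0))   # 0.0
```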

**Improving the analysis**

This new paper is trying to do something laudable. It is trying to improve the statistical analysis of the data.

As we get more and more data and the uncertainty on our measurement shrinks, it becomes more and more important to take into account every last detail.

In fact, with the Dark Energy Survey we have three people working full-time on testing and improving the statistical analysis we use to compare supernova data to theory.

We recognise the importance of improved statistical analysis because we’re soon going to have about 3,000 supernovae with which to measure the acceleration far more precisely than the original discoveries, which only had 52 supernovae between them. The sample that this new paper re-analyses contains 740 supernovae.

One final note about the conclusions in the paper. The authors suggest that a non-accelerating universe is worth considering. That’s fine. But you and I, the Earth, the Milky Way and all the other galaxies should gravitationally attract each other.

So a universe that just expands at a constant rate is actually just as strange as one that accelerates. You still have to explain why the expansion doesn’t slow down due to the gravity of everything it contains.

So even if the non-acceleration claim made in this paper is true, the explanation still requires new physics, and the search for the “dark energy” that explains it is just as important.

Healthy scepticism is vital in research. There is still much debate over what is causing the acceleration, and whether it is just an apparent acceleration that arises because our understanding of gravity is not yet complete.

Indeed that is what we as professional cosmologists spend our entire careers investigating. What this new paper and all the earlier papers agree on is that there is something that needs to be explained.

The supernova data show something genuinely weird is going on. The solution might be acceleration, or a new theory of gravity. Whatever it is, we will continue to search for it.

Tamara Davis, Professor, *The University of Queensland*

This article was originally published on The Conversation.