
The Guardian view on statistics in sciences: gaming the (un)known | Editorial


Statistical arguments are a crucial part of decision-making in a modern society. The kinds of decisions that governments and large companies must make all the time are governed by probabilities. In those circumstances of uncertain knowledge we need to reduce a cloud of unknowing to facts as hard and cold as hailstones that can be acted on, or even just used in arguments. But some of the most popular techniques for doing this are now under attack from within the profession.

The p value is supposed to measure whether the conclusions drawn from any given experiment or investigation of data are reliable. What it actually measures is how unlikely a result at least as extreme as the one observed would be if random chance alone were at work. Obviously this requires a sophisticated understanding of the results that chance might be expected to produce. This isn’t always available. To take one popular example, any calculation of how likely we are to be the only intelligent species in the universe depends absolutely on assumptions about the likelihood of intelligent species arising, which can’t be tested across a range of universes.
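
To make the idea concrete, here is a minimal sketch, in Python, of the calculation a p value performs; the coin-flipping numbers are invented for illustration and are not drawn from the editorial. It estimates how often pure chance would produce a result at least as extreme as the one observed.

```python
import random

# Hypothetical example: a coin comes up heads 60 times in 100 flips.
# How often would a fair coin, by chance alone, produce a result at
# least that extreme (60 or more heads, or 40 or fewer)?
observed_heads = 60
n_flips = 100
n_simulations = 100_000

extreme = 0
for _ in range(n_simulations):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    if heads >= observed_heads or heads <= n_flips - observed_heads:
        extreme += 1

p_value = extreme / n_simulations
# Roughly 0.057: surprising, but not "significant" by the usual 0.05 rule.
print(f"Estimated p value: {p_value:.3f}")
```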

Even when the background probabilities are well understood, it is possible to wrangle the data until significance appears or disappears to taste. The generally accepted standard of “significance” is a p value of less than 0.05, which is commonly read as a 95% chance that the result is not purely down to chance. Whether that is a safe interpretation depends on a great many factors, which the number itself cannot reveal. What is not in doubt is that crossing the threshold is extremely important in getting papers published, something key to any scientist’s career.
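
The gaming is easy to demonstrate. In the hypothetical sketch below (again illustrative, not taken from the editorial), twenty variables of pure noise are tested against an equally random outcome; with twenty tests at the 0.05 level, roughly one will clear the bar by luck alone, and reporting only that one makes noise look like discovery.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Pure noise: a random "outcome" and 20 unrelated random "predictors".
outcome = rng.normal(size=100)
predictors = rng.normal(size=(20, 100))

# Test every predictor and keep only those that look "significant" --
# the essence of wrangling data until significance appears.
significant = [i for i, predictor in enumerate(predictors)
               if stats.pearsonr(predictor, outcome)[1] < 0.05]

# With 20 independent tests at the 0.05 level, about one false
# positive is expected purely by chance.
print(f"'Significant' predictors found in pure noise: {significant}")
```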

Now there is a revolt in the profession against the overuse of this value. More than 800 statisticians – surely a meaningful sample – have denounced reliance on the value. It is time to abandon the notion of statistical significance, they say. It is possible that they don’t really mean this – the likelihood of that is left as an exercise for the reader – but their point is a simple and important one. By using the p value as a binary test of “statistical significance”, scientists and policy makers can miss important correlations. This error is astonishingly widespread: the authors cite a study of 795 published papers where slightly more than half concluded that a p value of more than 0.05 meant that nothing of interest had been discovered.
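
A hypothetical pair of studies shows how the binary reading goes wrong. Both estimate exactly the same effect; only their sample sizes differ, yet the 0.05 rule declares one a discovery and the other nothing at all. (The numbers below are invented for illustration.)

```python
from scipy import stats

# Two hypothetical studies estimating exactly the same effect:
# a 0.5-point improvement, with a standard deviation of 1.5 in both
# arms. They differ only in how many participants each recruited.
for label, n_per_arm in [("Study A", 70), ("Study B", 90)]:
    t, p = stats.ttest_ind_from_stats(
        mean1=10.5, std1=1.5, nobs1=n_per_arm,  # treatment arm
        mean2=10.0, std2=1.5, nobs2=n_per_arm,  # control arm
    )
    verdict = "significant" if p < 0.05 else "nothing of interest"
    print(f"{label}: estimated effect = 0.5, p = {p:.3f} -> {verdict}")

# Read as a binary test, Study A (p ~ 0.051) "found nothing" while
# Study B (p ~ 0.027) "found something", despite identical effects.
```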

The problem is not with the technique itself, they argue, but with the fallibility of the humans around it. This is not just a product of the human tendency to split and dichotomise the world. Once p values are recognised as the gold standard of testing, they become subject to Goodhart’s law: when a measure becomes a target, it will be gamed. This is especially true in drug testing and other applications where large sums of money rest on scientific judgments. The answer, of course, is not to abandon p values entirely, but to bear in mind their, and our, limitations. Lord Keynes’s insight comes to mind. He was right that there were matters where “there is no scientific basis on which to form any calculable probability whatsoever. We simply do not know”. Science will never be an infallible means of finding the truth so long as it is practised by humans, who have other interests at stake.


