David Spiegelhalter's book helps explain how implausible statistical nonsense arises in the first place

Do you thrill to the word “statistics”? Do you trust headlines telling you that having a Waitrose nearby adds £36,000 to the value of your house? Or that bacon, ham and sausages carry the same cancer risk as cigarettes? No, nor do I. That is why we need a book like this that explains how such implausible nonsense arises in the first place. Written by a master of the subject, it gives a historical overview of statistics and its handmaid probability, one of its first lessons being to decry the false notion that the numbers speak for themselves. They do not. Interpretation is vital, and so is the way they are derived.

To illustrate the former point, Spiegelhalter gives a table using admission data for STEM subjects at Cambridge in 1996. It shows that in each of Medicine, Veterinary Medicine, Computer Science, Economics and Engineering the acceptance rate for women was higher than for men. So far, so clear — yet the combined data from those five subjects showed that men had a higher acceptance rate overall. Surely one of those claims must be false, and I had to stare at the figures for a couple of minutes before I understood. In such situations, known as Simpson's paradox, the two populations (male and female applicants in this case) have different proportions among the sub-populations. In this example the acceptance rates for engineering were 32 per cent for women and 26 per cent for men, both higher than the overall acceptance rates, but there was a massively higher proportion of male applicants in engineering, so the 26 per cent for men had a far stronger pull than the 32 per cent for women. This may seem an odd case, but you can imagine how easy it is for editors and headline writers to give a false impression.
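The paradox is easy to verify with a short sketch. The numbers below are illustrative stand-ins, not the actual Cambridge data: only the engineering rates (32 and 26 per cent) come from the review, and the second subject and all applicant counts are invented to reproduce the effect.

```python
# Illustrative (made-up) figures: women have the higher acceptance rate
# in each subject, yet men have the higher rate overall, because far
# more men applied to the subject with the higher acceptance rate.
admissions = {
    "Engineering": {"women": (32, 100), "men": (260, 1000)},
    "Medicine":    {"women": (60, 400), "men": (12, 100)},
}

totals = {"women": [0, 0], "men": [0, 0]}
for subject, groups in admissions.items():
    for sex, (accepted, applied) in groups.items():
        totals[sex][0] += accepted
        totals[sex][1] += applied
        print(f"{subject:12} {sex:6} {accepted / applied:.1%}")

for sex, (accepted, applied) in totals.items():
    print(f"{'Overall':12} {sex:6} {accepted / applied:.1%}")
```

Running it shows women ahead in both subjects (32 vs 26 per cent, 15 vs 12 per cent) but behind overall (18.4 vs 24.7 per cent): the aggregate rate is a weighted average, and the weights differ between the two populations.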

These numbers were factual, but those derived from surveys are more problematic. There are heaps of ways for a survey to be badly designed: convenient but non-representative samples (e.g. telephone polls before an election); misleading wording of questions; failure to make fair comparisons (e.g. relying only on volunteers); a sample that is too small; and failure to seek out data that might contradict the conclusion. As Ronald Fisher, one of the founders of modern statistics, once put it: “To consult the statistician after an experiment is finished is often merely to ask him to conduct a post-mortem examination. He can perhaps say what the experiment died of.”

One of the great uses of well-designed analysis is to show what causes what. A mere correlation is not much help, and one should be wary of misinterpretation: for instance, a link between tax and health records in Sweden over 18 years led to the curious but rather dull conclusion that men with a higher socioeconomic position had a slightly increased rate of being diagnosed with a brain tumour. Some wag in the university press department reported this as “High levels of education are linked to heightened brain tumour risk”. So going to university increases the risk of a brain tumour? What nonsense. This is one of many intriguing little examples in the book, but there is a serious side to causality, such as the fact that smoking really does increase the risk of lung cancer. The discipline that establishes such facts is epidemiology, a word derived from the same Greek root as epidemic, and an epidemiologist is a kind of statistician.

Although statistics is essentially a very old subject — consider the Biblical census established during the reign of Caesar Augustus — probability is much newer. In the 1650s the Chevalier de Méré wanted to know which of two dice games had the greater chance of winning: throw one die at least four times, winning if you get a six; or throw two dice at least 24 times, winning if you get a double six. These days we know how to work out the odds: it’s about 52 per cent in the first game, and 49 per cent in the second. But probability still causes problems for most people, and Spiegelhalter gives a nice example: you have three coins — one has two heads, one has two tails, and the other a head and tail. You pick at random, toss the coin, and the result is heads. What is the chance the other side is also heads? Apparently, most people say 1/2 because the coin must be one of two, and each was equally likely to be picked. That answer is wrong. The correct answer is 2/3.

That reminds me of a maths lecturer who interviewed prospective students. He tossed a coin: once, twice, thrice, and more. Each time it landed heads, and he asked the interviewee what the chance was that the next toss would land heads. Most said one-half, but what they should have done was ask to inspect the coin — it was double-headed. This book tells us to examine our assumptions. Bravo.
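The lecturer's trick can be made precise with Bayes' theorem: each successive head shifts belief from "fair coin" towards "double-headed coin". The prior below — one coin in a hundred is double-headed — is my own illustrative assumption, not anything from the book.

```python
from fractions import Fraction

def p_double_headed(n_heads, prior=Fraction(1, 100)):
    """Posterior probability the coin is double-headed after
    n_heads consecutive heads, for a given prior."""
    # A fair coin shows n heads in a row with probability (1/2)^n;
    # a double-headed coin does so with probability 1.
    likelihood_fair = Fraction(1, 2) ** n_heads
    return prior / (prior + (1 - prior) * likelihood_fair)

for n in (1, 5, 10):
    print(n, float(p_double_headed(n)))
```

Even with a sceptical prior, ten heads in a row makes "double-headed" the far more plausible explanation — which is exactly why the interviewees should have asked to see the coin.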
