More or less misleading

We must fact-check the government’s claims about its coronavirus testing regime. But who will check the checkers?

A nurse in PPE with an official Covid-19 testing kit in April (©PAUL ELLIS/AFP via Getty Images)

In the world of the media, “fact-checking” is often viewed—and sometimes ridiculed—as an American obsession. Yet there may be moments when we all wish that the factual claims that reach us could be put through some kind of objective filter, to take out half-truths, errors and lies.

For the last five years, the BBC has run its own “anti-disinformation service”, called “Reality Check”. But its record is mixed, and its objectivity was tarnished in many people’s eyes by its one-sided concentration on debunking pro-Brexit arguments. In the whole output of the BBC, however, there is one shining beacon of excellence when it comes to challenging misreporting and correcting misunderstanding: the Radio 4 programme More or Less, which investigates claims of all kinds about statistics and is presented—in a more or less tolerably faux-naif way—by the economist Tim Harford. Here at least is a gold-standard programme, utterly objective in its puncturing of myths, wherever they may come from.

Or so I thought. Others have thought so too. During Harford’s period as presenter the programme has received many accolades, including the prize of the Royal Statistical Society for “statistical excellence in broadcast journalism”. Has success led to complacency, and to negligence about—of all things—checking facts? There is a motto in politics, “Who will guard the guardians?” Listening to More or Less in recent weeks has made me wonder: “Who will check the checkers?”

It all began with an item on the Government’s coronavirus testing regime. The basic backstory here will be familiar. At the start of the pandemic, the UK was badly under-prepared for mass testing; large-scale capacity simply did not exist. Eventually, the Health Secretary, Matt Hancock, announced on April 2 that he was setting a target of 100,000 tests per day, to be reached by the end of that month. And on May 1 he declared that the target had been met on the previous day. The numbers fluctuated for a while thereafter, but by the end of May the next target, of 200,000 tests per day, had apparently also been achieved.

The entire testing programme was a composite of different kinds of test. A policy document setting out the details, “Coronavirus (Covid-19): Scaling up Testing Programmes”, was published on April 4, dividing the tests into four categories. Two were diagnostic, to find out whether a given person currently had the disease: one of these consisted of in-house tests in hospitals, the other of tests—typically, swab tests—conducted elsewhere or posted out to individuals. The other two categories were antibody tests, to see whether a person had previously had the disease; some of these were for research and “surveillance” purposes, plotting the spread of the virus rather than yielding information about individuals.

It is understandable that all four categories were added together to produce the total number of tests on any given day. The policy document clearly explained this. Of course the headline figure given in daily announcements did not descend to that level of detail, so members of the public may have thought that the total consisted entirely of diagnostic tests. But there was no misinformation there: the policy had been clearly stated, and was adhered to.

Much less understandable was the decision to include in the count every test kit that had been posted out, assigning it to the day of posting—which ignored the fact that some of those kits might never be returned. This point was put to the man in charge of the programme, Professor John Newton, when the 100,000 target was officially reached; he confirmed that they were counting them out but not counting them in, and defended the practice as statistically correct, though without giving any clear reasons.

Another criticism that later emerged was that the figure for the number of tests was larger than the number of people tested, as some diagnostic testing involved giving a person two different tests at the same time. The fact remained, however, that the original target had always been stated in terms of the number of tests, not the number of people tested.

Enter More or Less. It first discussed the issue on May 6, noting that when Matt Hancock had triumphantly announced the reaching of the 100,000 target on April 30, he was including postal tests sent out that day, rather than tests actually completed. By then, this was not an original point: the criticism had been widely made in the media, and Prof. Newton had offered his defence. More or Less could have done something useful if it had questioned Prof. Newton about this, but it preferred to score an easy point, already made by others.

Matt Hancock at Downing Street for a Covid-19 briefing (©Mark Thomas / Alamy Stock Photo)

Having got its jaws around this bone, however, More or Less could not let it go. One week later the presenter, Tim Harford, conducted a little interview with his own producer, Kate Lamble, who was apparently now serving as resident expert on the subject. When he asked whether the Government had “acknowledged” that “a test in the post isn’t the same as a test completed”, she answered: “No, they haven’t.” This was quite misleading, as Prof. Newton, who spoke for the Government’s testing programme, had been on the BBC acknowledging that there was a difference between sending out a test and completing it on its return. But then the programme’s concern for accuracy took a very strange turn.

If we eliminated all the postal tests from the total, Lamble declared, it would become clear that the Government had never reached its target—neither on April 30 (when it had sent out roughly 40,000 of them) nor on any subsequent day. Listeners up and down the land must have been scratching their heads at this: it is one thing to say that postal tests should not be counted on the day they are posted out, but another to say that not a single one of them should be counted, at all, ever. For a moment, Harford seemed to be voicing the listeners’ concern: he asked whether the Government might now be meeting its targets, if one took into account the tests that were returned. His producer’s reply was: “Unfortunately we have no idea.” And a little later she referred to “these tests in the post, which we’re not counting”.

This was a very handy way of making tens of thousands of tests disappear on a daily basis. It rested on a confusion so elementary that one has to wonder about the frame of mind in which this award-winning programme is put together—a confusion between “there is no accurate figure for X” and “therefore X equals zero”. Lamble assumed, in effect, that no postal test had ever been returned, even though Prof. Newton had said, on Radio 4 on May 2, that “the home kits are very popular, it’s what people ask for”. The natural thing for a serious statistician to do would be to make an assumption about what percentage of posted tests were returned, fix an average turnaround time between the original sending out and completion of the test in the lab (say, three days), and then add in the figures on that basis. Or else give a range of possibilities: “if we assume a return rate of 75 per cent . . .”; and so on. But to assume a rate of 0 per cent looks like an act of manipulation more gross than any trickery of which the Government has been accused.
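To make the point concrete, the sort of adjustment described above can be sketched in a few lines of code. The return rate, the turnaround lag and the daily posting figures used here are purely hypothetical, chosen only to illustrate the method; no real data is involved.

```python
# Illustrative sketch only: the return rate, turnaround lag and posting
# figures below are hypothetical assumptions, not published data.

def estimate_completed(posted_by_day, return_rate=0.75, lag_days=3):
    """Credit an assumed fraction of posted kits as completed tests,
    a fixed number of days after they were sent out."""
    completed = {}
    for day, posted in posted_by_day.items():
        done_day = day + lag_days
        completed[done_day] = completed.get(done_day, 0) + int(posted * return_rate)
    return completed

# Hypothetical example: 40,000 kits posted on day 0, 30,000 on day 1.
print(estimate_completed({0: 40_000, 1: 30_000}))
# {3: 30000, 4: 22500}: the returns are credited a few days later,
# rather than counted on the day of posting, or not counted at all.
```

Running the same calculation with a range of assumed return rates, say 60 to 90 per cent, would give upper and lower bounds for the daily totals, which is precisely the kind of range the programme never offered.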

Was it mere inadvertence—a slip of the tongue, even? Unfortunately not. In the following week’s programme, the same point was made: “If you remove the posted kits, because we don’t know how many were actually carried out . . .” And now Lamble went a step further, explaining that the Government could be shown not to have reached its target if one removed both the posted diagnostic tests and all those made for research purposes.

This point too was reinforced in the next week’s broadcast, with a little scripted dialogue. Harford: “How many times does the Government say it has carried out 100,000 tests per day this week?” Lamble: “Six.” Harford: “How many times, if you take away postal tests and non-diagnostic research tests?” Lamble: “None.” If these two champions of factual accuracy had ever actually looked at the document in which the Government had set out its testing programme, they would have noticed that the non-diagnostic research tests were always intended to be part of the total. It is really not very difficult to denounce the Government for missing its target, if you just change the target by subtracting categories at will.

However, this broadcast did appear to offer a rudimentary justification of Lamble’s decision to exclude all the posted tests from the total. Now, sounding like a proper statistician at long last, she said that 91,000 of the postal tests, and of those sent to drive-through “hubs”, had been found to be “voided”. Certainly, 91,000 is a statistic, and seemingly a large one; but what was her source? Internet searches revealed only one likely candidate, a piece on the Spectator website which gave this figure; but it added another vital piece of statistical information—which, strangely, Lamble did not repeat—when it said that the 91,000 represented “over 8 per cent” of the (cumulative) total in that category. This suggested that more than a million postal and drive-through tests had been sent out, of which well over 900,000 were completed successfully. (The Government later announced that by the beginning of June 1,234,675 tests had been mailed out to the general population.) That figure of 900,000-plus seems to give an order-of-magnitude estimate of the extent to which More or Less had misrepresented the facts.
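For what it is worth, the arithmetic behind that estimate takes only a few lines to reproduce. The only inputs are the two figures reported in the Spectator piece; everything else is derived from them, on the simplifying assumption that the voided share was roughly 8 per cent.

```python
# Back-of-envelope reconstruction of the order-of-magnitude estimate.
# The only inputs are the Spectator's figures: 91,000 voided kits,
# said to be "over 8 per cent" of the cumulative postal/drive-through total.
voided = 91_000
voided_share = 0.08   # "over 8 per cent", so the true total is slightly lower

implied_total = voided / voided_share        # about 1,140,000 kits in the category
implied_not_voided = implied_total - voided  # about 1,050,000 kits not voided

print(f"Implied total in category: {implied_total:,.0f}")
print(f"Implied not voided:        {implied_not_voided:,.0f}")
# Both numbers sit comfortably above the 900,000-plus cited in the text.
```

Since the true share was said to be a little over 8 per cent, the real totals would be slightly lower, though still consistent with the figures given above.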

And that was just the posted diagnostic tests. Two weeks later, Lamble added that in order to decide whether the Government had met its targets, we must also subtract all the antibody tests. “Remember,” she said, “that antibody tests, like pregnancy tests and spelling tests” (yes, More or Less prides itself on its witty humour too) “are not the kind of swab test the Government was originally talking about. So they’ve still decided to include oranges in their apples target.” Yet the policy document defining the target had always included these. This was just another misrepresentation by More or Less.

With evident satisfaction, Harford and Lamble referred several times in these broadcasts to two critical public letters sent to Matt Hancock by Sir David Norgrove, the head of the UK Statistics Authority. Those letters did complain, rightly, about non-user-friendly spreadsheets giving the detailed information. But most of their criticisms were about how the target and the totals were summarised at the daily briefings—they were about headlines and slide captions, not about whether the target had been met in terms of the Government’s own detailed definition of it. Sir David was referring generally to the jump from details to headlines, and also specifically to the counting of tests sent out rather than tests sent back, when he penned his most critical phrase: “The aim seems to be to show the largest possible number of tests, even at the expense of understanding.”

Fair comment. But did that justify the approach of Lamble and Harford, whose aim seemed to be to show the smallest possible number of tests, even at the expense of understanding? If 8 per cent of the posted tests are wrongly included by the Government, does that justify excluding not only those but the other 92 per cent as well? Which side of this story is being more or less misleading? (If there is a simple answer to this, in favour of More or Less, it is not apparent; nor did I receive any reply when I wrote to them, making all these points, after each of the broadcasts described above.) And if the BBC’s premier fact-checking programme can mislead like this, what trust can we have in the rest of its output?