Simon Wood, Chair of Computational Statistics in the School of Mathematics at the University of Edinburgh, has been involved in the RSS journals for a long time, having served as co-editor of Series B from 2018–21. His main research specialisms are smoothing and statistical ecology/epidemiology. His paper “Some statistical aspects of the Covid-19 response”, co-authored with Ernst C. Wit, Paul M. McKeigue, Danshu Hu, Beth Flood, Lauren Corcoran and Thea Abou Jawad, is the subject of the upcoming Discussion Meeting on 10 April 2025.
In addition to identifying the many things that went well in the Covid response, the paper discusses some of the important points that didn’t go so well. It thereby hopes to increase the chances of doing better next time.
Here, we chat to him about some of the things he’s hoping to discuss at next month’s meeting...
You point out that a definition of Post Covid-19 Syndrome such as:
‘Signs and symptoms that develop during or after an infection consistent with COVID-19, continue for more than 12 weeks and are not explained by an alternative diagnosis’ is likely to result in a post hoc ergo propter hoc conclusion. Indeed, offering any sort of definition is likely to do so. Assuming that post Covid-19 syndrome (or “long Covid”) was of genuine interest, how would you resolve this problem?
Simon Wood: Something less catch-all is needed. ‘Symptoms that develop after infection’ needs some qualifying time window. It would also be better to have a list of core symptoms, rather than a syndrome that covers anything for which there is no obvious alternative diagnosis. If the symptom list is still too difficult to define, and long Covid is anything like as serious a problem as was portrayed in 2020/21, then a cohort study would be justified, in which subjects agreed to be blind tested for Covid each time they had cold-like symptoms, and would then report any ongoing symptoms.
You discuss how the economic impact of Covid affects health; do you have any thoughts on more general economic effects (e.g. on manufacturing, or the hospitality sector)?
SW: The full lockdowns, in particular, were obviously very damaging economically. The unprecedented halting of much economic activity, while effectively creating money through QE for short-term mitigation, was also bound to have serious long-term effects. It seems to us a better candidate for description as a ‘black swan event’ than the historically moderate Covid pandemic itself. But it is very difficult to attribute exactly what economic damage was done by Covid measures relative to, say, Brexit or the Ukraine war energy shock. This allows those so inclined to argue that there was negligible effect. That seems very unlikely to us. Concretely, a fairly large proportion of the government’s current debt-servicing problems would not be there without the cost of the Covid response. Against that, it is important to stress that any sensible measures were going to have an economic impact; but we think the damage was much larger than was sensible or fair.
We concentrated on the knock-on health effects of economic disruption because that enables like-with-like comparisons. That was important during Covid, when the general mood was that health was all that mattered. It also somewhat avoids questions that are, in the end, political and philosophical. In the abstract, most people will agree that there is more to life than avoiding death, but the concrete choices that this implies are very political.
Your discussion of cases vs. prevalence is interesting. Can you give us a sneak preview of some of the major points you want to raise here?
SW: It’s really just the fundamental statistical point that data are only reliable if they are representative of the population that you want to use the data to learn about. And the best way to guarantee representativeness is to use data that are a random sample from the population of interest. The Covid case data, which saturated the media in 2020/21, were absolutely not such a representative sample, let alone a random sample. This becomes pretty obvious if you just plot the case data against what came out of the proper random sampling conducted by the ONS.
One disturbing feature of cases is that in each wave they appeared to accelerate faster than the reliable ONS measurements suggested: if you are focussing on cases then there is an obvious danger of over-reaction.
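To make the mechanism concrete, here is a minimal simulation sketch (not from the paper, with entirely invented numbers): if symptom-driven testing capacity itself expands during a wave, case counts grow faster than true prevalence, while a fixed-size random-sample survey recovers the true growth rate.

```python
import numpy as np

# Illustrative sketch only: all rates and sample sizes below are assumptions,
# not figures from the paper or from ONS data.
rng = np.random.default_rng(1)

days = 60
true_growth = 0.05                                           # assumed 5%/day epidemic growth
prevalence = 0.001 * np.exp(true_growth * np.arange(days))   # true infection prevalence

# Non-random "case" data: tests go mainly to symptomatic people (so the tested
# group is enriched for infection), and testing capacity itself grows 2%/day.
tests = (5_000 * np.exp(0.02 * np.arange(days))).astype(int)
p_pos_given_tested = np.minimum(20 * prevalence, 1.0)        # assumed 20-fold enrichment
cases = rng.binomial(tests, p_pos_given_tested)

# ONS-style survey: a fixed random sample of 10,000 people tested each day.
survey_n = 10_000
survey_prev = rng.binomial(survey_n, prevalence) / survey_n

def growth_rate(y):
    """Daily exponential growth rate implied by y: slope of log(y) vs day."""
    t = np.arange(len(y))
    return np.polyfit(t, np.log(np.maximum(y, 1e-9)), 1)[0]

print(f"true growth rate:         {true_growth:.3f}")
print(f"implied by case counts:   {growth_rate(cases):.3f}")        # ~0.07: exaggerated
print(f"implied by random survey: {growth_rate(survey_prev):.3f}")  # ~0.05: correct
```

In this toy setup the case counts confound epidemic growth with the growth of testing itself, which is one simple way the apparent acceleration of cases can outpace what random-sample prevalence measurements show.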
You are critical of the press for some blatantly untrue (verging on stupid) comments. Sadly, isn’t that what we expect from the press?
SW: I’m not sure. I agree that the press often gets things I know a bit about badly wrong, often because of the time pressure under which journalists work. They don’t have time to really understand, before moving on to something else. But Covid was a bit different – it was the story for months – there was time. Also I think it is reasonable to expect better from major reputable news organisations. This is also something for academics to examine their consciences about – some disturbing nonsense in the media came directly from academics, as we mention. Zero Covid is an example.
A generic problem with the media is the preference for clear simple intuitive answers delivered with certainty. So nuanced complicated answers that acknowledge uncertainty tend to lose out to those provided by pedlars of simplistic certainty, irrespective of truth.
Indeed, a disturbing aspect of the press coverage was how obvious it sometimes made the ‘marketplace for confirmatory evidence’. It often seemed that an outlet’s politics determined its position, which in turn determined the scientific evidence it would consider or report. Politics-based evidence rather than evidence-based politics. For example, from my perspective, the Guardian did a poor job over Covid, while the Spectator was careful and evidence based. But the Spectator’s careful, evidence-based approach does not seem to extend to climate change...
With regard to “lockdowns”, what you say is interesting: ‘The view that nothing preceding lockdown had worked, because cases and deaths were still increasing until lockdown…’. While it may be so that cases and deaths were still increasing before lockdowns, did people think that no prior measures had worked at all, or rather that they weren’t working enough?
SW: I don’t think that the question was seriously addressed. The models said we needed lockdown, and there was an overwhelming clamour for it on social media.
Do you have any comment on the UK government’s insistence that they were ‘following the science’?
SW: This was a fig-leaf. The policymakers tried to hide behind the modellers by claiming that science dictated what had to be done, while the modellers hid behind the policymakers, claiming that they offered no actual advice and had no say on policy. Neither is true. If you produce modelling, or other science, for policymakers you will influence policy and know it. At the same time, the policymaker’s job is to balance scientific advice with wider societal issues – avoiding deaths from one particular disease is not the only responsibility of government, so ‘the science’ cannot be the only determinant of policy.
I think that the Government followed the part of the science that confirmed that the policies already implemented had been right; and that science was overly dominated by epidemic models. If they had followed the previous scientific consensus, then lockdowns would have been rather unlikely.
It’s worth reading Dominic Cummings’ evidence to the Covid Inquiry, however. What is recounted there is very difficult to describe as ‘following the science’ in any normally understood way.
Being a little self-critical, if you will, how much do you think you have benefited from hindsight, and from the fact that you have perhaps had more time to think about these matters than those working on them during the pandemic?
SW: I said many of the things we write in the paper at the time, but utterly failed really to get them heard. The work on infections peaking before lockdown was from early May 2020 (people on SAGE knew about it), and before that several of us had tried to get many of the points aired in the media. Later, the Spectator did publish some of my articles on the cost of life, modelling and peak timing, and mask evidence, and I was interviewed on BBC Scotland about the importance of proper sampling, rather than relying on case data (Scotland was late to the ONS survey), and the deficiencies of the SPI-M omicron models. But obviously it is much easier to be critical of policy than to be responsible for making it.
Where hindsight really comes into play is that much of what we say can be said with more certainty now than it could be early on. There is more evidence on the timing of infection waves relative to lockdowns, more evidence that the epidemic models were not very good at prediction, and more evidence that long Covid is not the enormous problem that was presented. Similarly, the notion that we would ‘build back better’ and not have major economic consequences is not one that most people would now take seriously.
How can statisticians better contribute or inform future pandemic or outbreak responses in real time? Are these issues avoidable at the time in data-sparse scenarios?
SW: Getting proper measurement prioritised very early on, and pointing out the dangers of instead relying on un-validated models. Pushing for something closer to best practice in the combination of evidence and the weighing of uncertainty and societal risk. Contributing to rapid critical evaluation of policies in the light of data as they become available. Calling out poor use of statistics clearly and firmly, before damage is done as a result. Speaking up.
I don’t think there were enough statisticians involved relative to epidemic modellers, for example. Statistics is about extracting information from data and reasoning in the face of uncertainty, and these things are surely central to any attempt at science based policy in an epidemic. We probably need to be more assertive about the fact that it is measurement and statistical methods that turn the mathematical speculations encoded in models into science. A culture shift from viewing a statistician as someone who is useful for improving parameter estimates to be fed into ‘the models’ would be helpful.
Obviously much of what the ONS did was exemplary, and internationally I am still baffled by the fact that other countries were not directly measuring the size of an epidemic considered so serious that it justified shutting down society for months. Clearly statisticians need to be much better at communicating that not all data are useful measurements.