On 24 April 2018, RSS President Professor Sir David Spiegelhalter gave the third annual lecture for the Independent Press Standards Organisation (IPSO). Here is the content of his lecture for those who were unable to attend.
- ‘Just one alcoholic drink a day will shorten your life, study showed’ (Evening Standard, 13/4/2018). The study actually showed that one alcoholic drink a day had no impact on survival.
- ‘Why marmite could prevent miscarriages and birth defects’ (Daily Telegraph, 10/8/17). This was a study on mice that had been genetically modified to have miscarriages. No known results on humans (and no marmite was used).
- ‘UK unemployment falls to 1.44 million: UK unemployment fell by 3,000 to 1.44 million in the three months to November, official figures show’ (BBC Online, 24/1/2018). The 3,000 ‘fall’ is an estimate with a margin of error of +/- 77,000: we have essentially no idea whether unemployment has gone up or down.
The UK has some of the best science in the world. The Office for National Statistics is highly respected, and scientists come fifth in Ipsos-Mori’s Veracity Index at 83% – somewhat higher than journalists at 27%.
But this quality of science is not always reflected in the stories that appear in the media, which, as shown above, can be incorrect, exaggerated and over-confident. The least reliable science can get the most coverage, oscillating between claims of breakthroughs and new risks. And even if the science is excellent, headlines may pay scant regard to accuracy.
As will become clear, I don’t particularly blame the specialist journalists, who I think on the whole do a good job in a challenging and changing media environment. I should also admit that this is a highly subjective commentary, full of personal anecdotes, short on data, and based on evidence carefully selected to support my argument.
Why coverage matters
Although people claim not to trust the media, they are influenced by its content and tone. Ipsos-Mori’s Perils of Perception survey, started in 2013 by the Royal Statistical Society, provides an annual snapshot of the misperceptions of the public: for example, only 19% said the murder rate had gone down since 2000, whereas in truth it has fallen by nearly a third. Respondents estimated that 19% of teenage girls got pregnant every year, whereas the true figure is 1.9%. The media must carry some responsibility for this: the staggering 60% decline in teenage pregnancies over the last 20 years receives scant attention.
Of course, as the late Hans Rosling argued so forcefully, good news gets little coverage: Max Roser from Oxford points out that newspapers could have legitimately run the headline 'Number of people in extreme poverty fell by 137,000 since yesterday' every single day for the past 25 years, but the novelty would rapidly wear off.
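As a quick back-of-the-envelope check (my own arithmetic, not a figure from Roser or the lecture), that daily fall accumulates to a remarkable total over 25 years:

```python
# Rough arithmetic behind Roser's hypothetical daily headline:
# a fall in extreme poverty of about 137,000 people per day,
# every day, for 25 years.
daily_fall = 137_000
days = 365 * 25

total = daily_fall * days
print(f"Cumulative fall: about {total / 1e9:.2f} billion people")
# -> about 1.25 billion people
```

Around 1.25 billion people: an enormous story in aggregate, but one that never produces a single day of dramatic news.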
We have to accept that the media is driven by what people want to read. But it’s important to remember that misleading headlines have been followed by people refusing vaccination, and by measles cases in Europe.
But even more than these practical effects, I believe there is a more basic principle at stake. In an age of data-driven information, and misinformation, the media have a vital role in helping sort the signal from the noise (and the fake signal).
The good
It’s not all bad news. We should be grateful for many aspects of the media treatment of science and statistics in the UK, particularly when covering contested issues. We no longer see scare stories about ‘Frankenstein foods’, we’ve exported Andrew Wakefield and vaccine scepticism to the US, and we are spared the German fear of nuclear power, the French fear of nanotechnology, and the anti-science rhetoric prevalent elsewhere.
Good science can receive prominent and accurate coverage. The media is also supportive of moves to improve openness of data, such as the Royal Statistical Society’s successful campaign to stop official statistics being released ahead of publication to various interested parties around Westminster, and its push to clarify the rules of ‘purdah’ around elections, which have previously prevented government scientists from publicising their basic, apolitical research.
The BBC is investing in its data team and has appointed an RSS ambassador, Robert Cuffe, as head of statistics for BBC News. Fact-checkers such as Full Fact, the BBC’s Reality Check and Channel 4’s FactCheck are active, and the media naturally enjoy the spectacle of a minister or department being called out by the UK Statistics Authority for being naughty with the numbers. There is also some good data journalism in the UK (though still lagging behind the US), celebrated by the Royal Statistical Society’s coveted Statistics in Journalism awards. PR-driven ‘surveys’ are generally weeded out, and there is a refreshingly low appetite for the sort of popular psychology studies that feature so prominently in TED talks.
I believe a major reason for this good news is the willingness of UK scientists to engage with the media, strongly encouraged by the Science Media Centre. On receiving press releases under embargo, many statisticians and scientists are willing to drop everything, study the press release and article, and provide comments within a few hours. The most gratifying experience – even better than being quoted – is when some shabby claim from a non-peer-reviewed conference poster, say about some new cause of autism, receives so much criticism that it is kept out of the news entirely.
The bad
Stories can be distorted all along the media pipeline, from the scientists and statisticians who analyse the data, through the organisations that promote their work, the press officers, the journalists and the sub-editors.
[Figure: The pipeline of data-based evidence. Exaggerations and distortions can occur at each stage between those producing the data and the public consuming the story.]
Although the ‘crisis’ in reproducibility may have been exaggerated, scientists can have agendas and reach inappropriate conclusions. The data in a recent study showed that people who reported drinking at the government alcohol guidelines (around a pint of beer a day) had faster reaction times than those who drank either more or less alcohol. But the investigators somehow concluded that the guidelines should be reduced, leading to the classic Sun headline ‘Boffins claim: 1 pint a day will give you dementia’ – completely the opposite of what the actual data said.
Scientific publications can select what to publish based on novelty and interest, and peer review is variable and imperfect – researchers get daily emails inviting them to submit to so-called ‘peer-reviewed’ journals. Press releases are only issued for the most ‘newsworthy’ studies, and may sex up the story: ‘Why going to university increases your risk of a brain tumour’.
My experience is that science journalists generally want to do a good job and try to respect what can be claimed, but frankly the same cannot be said of some sub-editors – just look at the instances already quoted. A particular irritant is the false quote, presumably intended as a terse summary of an opinion, but open to abuse. As just one of many possible examples, a well-crafted story about the lack of strong evidence that light drinking in pregnancy causes harm was given the front-page headline ‘Light drinking “does no harm in pregnancy”’ (Times, 12/9/17; the online headline has since been changed) – a claim that nobody made and nobody thought. Worse, the only person directly quoted in the article was me. After numerous complaints to the paper and IPSO, an apology was issued a week later, in small print on page 34. Sometimes I sense an attitude that it doesn’t really matter whether the headline is right or not – after all, some boffin will no doubt come up with some opposite claim next week.
Particular problems in reporting science and statistics
Out of many possible grumbles, four stand out for me:
- There is not enough acknowledgement of the stage of the scientific process: there is a huge distinction between early animal work and the results of large clinical trials or a combined analysis of all the major studies.
- Too much notice is taken of single studies, or of the change in a single statistic, such as unemployment or crime, from a previous period. These isolated results are rarely ‘significant’, in any sense, and context and longer-term trends are needed for reliable conclusions.
- Real life is complex. Medical treatments have harms and benefits – two people faced with exactly the same evidence could come up with entirely different choices as to what is best for them. But treatments tend to be declared either a breakthrough or a scandal.
- Uncertainty is ignored. The substantial margins of error on unemployment and migration statistics can mean that many column-inches are spent trying to interpret changes that may be illusory, as the sketch below illustrates.
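On that last point, here is a minimal sketch (my own illustration, treating the +/- 77,000 on the unemployment figure quoted at the start as a 95% margin of error) of why such a change tells us almost nothing:

```python
# Why a 'fall' of 3,000 with a margin of error of +/- 77,000 is
# uninformative: the plausible range for the true change spans zero.
reported_change = -3_000     # the headline 'fall' in unemployment
margin_of_error = 77_000     # assumed to be a 95% margin of error

low = reported_change - margin_of_error
high = reported_change + margin_of_error
print(f"Plausible range for the true change: {low:+,} to {high:+,}")
# -> -80,000 to +74,000: unemployment may have risen or fallen.
```

Any value in that range is consistent with the data, so a headline proclaiming a ‘fall’ of 3,000 is reporting noise.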
Telling statistical stories
Journalists, quite reasonably, want to translate statistics into something that applies to the individual reader. Sometimes this doesn’t seem fair to statisticians – those who talk about dinosaurs or distant galaxies are not asked to do this. But there is a responsibility to use numbers in a way that makes sense to people, in order to put the conclusions in perspective. Research suggests that an effective way of communicating risks is to use ‘expected frequencies’: what does this mean for 100 people similar to you?
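As a minimal sketch, with entirely made-up numbers rather than figures from any real study, here is how a relative-risk headline might be translated into expected frequencies:

```python
# Translating a hypothetical '20% increased risk' headline into
# expected frequencies for 100 similar people.
baseline_risk = 0.05        # assume 5 in 100 are affected anyway
relative_increase = 0.20    # the headline's '20% increased risk'

exposed_risk = baseline_risk * (1 + relative_increase)
n = 100
extra = (exposed_risk - baseline_risk) * n
print(f"Out of {n} similar people: {baseline_risk * n:.0f} affected anyway, "
      f"{exposed_risk * n:.0f} if all were exposed, i.e. {extra:.0f} extra case.")
```

Framed this way, an alarming-sounding ‘20% increased risk’ becomes one extra case per hundred people, which readers can weigh for themselves.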
But there are some fundamental difficulties with story-telling from data. Classic narratives deliver an emotional hit to the reader, reveal a clear causal path, and come to a neat conclusion. But science and statistics are not like that: they can seem impersonal, they rarely offer a clear chain of causation, and their results are often ambiguous.
What could be done?
These problems are not the responsibility of any single group or organisation, and improvements could be made all along the media pipeline. Government could improve transparency and fairness by ending pre-release access to statistics and ensuring purdah is not over-interpreted. Scientific journals can curtail the extravagant claims of their authors. Press officers can openly report caveats, and explore the labelling system developed by the Science Media Centre. The Royal Statistical Society can continue to celebrate statistics through media-friendly initiatives such as Statistic of the Year. The recent House of Lords report on polling recommended that the RSS work with IPSO and others to provide training for journalists to ask the right questions when judging the quality of data.
The media could follow the BBC in increasing support for data-driven journalism, defer to their specialist reporters, and keep sub-editors on a tighter leash. And press regulators can come down hard on practice that breaks their code, such as putting false and misleading ‘quotes’ in headlines. IPSO has started producing resources and guidance for journalists and editors, and it would be timely for it to produce guidance on statistics: the RSS fed into the development of the BBC’s editorial guidance on statistics and would be pleased to help others, such as IPSO, in a similar endeavour. Guidance for science reporting, drawn up by the Science Media Centre for the Leveson Inquiry, already exists.
It requires considerable skill to report data-based claims in a way that is engaging and yet true to the evidence. I have reasonable confidence in the journalists: it remains for the rest of the pipeline to rise to the challenge.
Careless statistical reporting could cost lives.
-----------
This lecture was delivered at the Royal Statistical Society on 24 April 2018. Watch the livestream of the lecture on IPSO's Facebook page.