Experimental designs for quality improvement: A whistle-stop tour - event report

On 21 April 2021 the Quality Improvement Section of the RSS held a meeting on Experimental designs for quality improvement. The meeting was chaired by Shirley Coleman of Newcastle University who also chairs the RSS Quality Improvement Section, and the speakers were as follows:

  • 'Basic principles of Experimental Design – a refresher' by Stephen Ellison of LGC Limited
  • 'Quality improvement using Bayesian adaptive design applied to a digital twin' by Liam Fleming of Newcastle University and Johnson Matthey
  • 'Quasi-experimental design in quality improvement studies - statistical considerations' by Abdel Douiri of King's College London

The statistician Sir Ronald Fisher famously said that asking a statistician to examine data only after an experiment is completed is to invite a post-mortem; the statistician can often say only how the experiment died. This afternoon seminar looked at traditional and new strategies to make sure that quality improvement studies delivered useful results. The event included three speakers from different backgrounds, covering traditional design principles, Bayesian adaptive designs and the idea of quasi-experimental design for assessing healthcare interventions.

Steve Ellison started with a broad overview of some characteristics of successful experiments, from the perspective of measurement in analytical chemistry. These included clear objectives; choice of the right outcome measure; adequate capability to detect the effects expected from an intervention; plans for coping with nuisance effects that might interfere with interpretation; and early consideration of how the data will be assessed. One key point was that improving the power of an experiment is not just a matter of increasing sample size: strategies for improving power can include reducing the number of different groups; use of blocked or within-subject designs to improve precision; and changing the response measure - for example, using a more sensitive measure, or using a precise predictor rather than an imprecise outcome measure.
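
As a rough illustration of the within-subject point (a hypothetical simulation, not material from the talk), the sketch below compares the power of an unpaired comparison with a paired design in which each subject is measured under both conditions, so that subject-to-subject variation cancels out:

```python
# Hypothetical sketch: power of unpaired vs paired (within-subject) designs
# for the same treatment effect and the same total number of measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, effect, subject_sd, noise_sd, n_sim, alpha = 20, 0.5, 1.0, 0.5, 2000, 0.05

hits_unpaired = hits_paired = 0
for _ in range(n_sim):
    # Unpaired: two independent groups, each with its own subjects
    a = rng.normal(0, subject_sd, n) + rng.normal(0, noise_sd, n)
    b = rng.normal(effect, subject_sd, n) + rng.normal(0, noise_sd, n)
    hits_unpaired += stats.ttest_ind(a, b).pvalue < alpha

    # Paired: each subject measured under both conditions, so the
    # subject effect cancels in the within-subject differences
    subj = rng.normal(0, subject_sd, n)
    x = subj + rng.normal(0, noise_sd, n)
    y = subj + effect + rng.normal(0, noise_sd, n)
    hits_paired += stats.ttest_rel(x, y).pvalue < alpha

print(f"Power (unpaired): {hits_unpaired / n_sim:.2f}")
print(f"Power (paired):   {hits_paired / n_sim:.2f}")
```

With these assumed settings the paired design detects the effect far more often, even though the total number of measurements is the same.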

Liam Fleming described some of the advantages of Bayesian adaptive design in the context of chemical manufacturing process improvement. These processes are complex, involving multiple chemical reactions and fluid dynamics. On-plant experimentation is costly and time-consuming, so approximating “digital twins” are constructed using simulation. Simulation is, however, comparatively time-consuming, so efficient methods of selecting trial settings are important. Bayesian adaptive design uses the information from initial investigations to choose a new sampling point, usually intended to maximise information gain. The simulated process response at each new point is used to update the initial information, and new sampling points are selected. The result is an efficient survey of the region of interest. In a process optimisation example, an interesting feature of the algorithm was that one relatively uninformative factor was not varied much during the study, resulting in better information and reduced prediction errors.
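
The adaptive loop can be pictured with a generic Bayesian optimisation sketch (our own illustration; the talk did not prescribe this particular algorithm, and the “digital twin” here is only a toy function). A Gaussian-process surrogate is refitted after each simulated run and the next trial setting is chosen by an expected-improvement criterion:

```python
# Illustrative sketch of an adaptive design loop (assumed: Gaussian-process
# surrogate + expected-improvement acquisition; toy "digital twin" function).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def digital_twin(x):
    # Toy stand-in for an expensive process simulation
    return -(x - 0.6) ** 2 + 0.05 * np.sin(15 * x)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 4).reshape(-1, 1)        # initial design points
y = digital_twin(X).ravel()
grid = np.linspace(0, 1, 201).reshape(-1, 1)   # candidate trial settings

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)                               # update surrogate with all runs so far
    mu, sd = gp.predict(grid, return_std=True)
    best = y.max()
    # Expected improvement: favour settings with a good predicted response
    # or high uncertainty, then run the "simulation" at the chosen point
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)
    x_next = grid[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, digital_twin(x_next).ravel())

print(f"Best setting found: x = {X[np.argmax(y)][0]:.3f}, response = {y.max():.3f}")
```

The acquisition step is where the adaptivity lies: each new trial setting is placed where the surrogate predicts either a good response or high remaining uncertainty, so the region of interest is surveyed with relatively few simulation runs.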

Abdel Douiri described the use of 'quasi-experimental design' in quality improvement studies for healthcare. Such studies include individual interventions, such as changes in medical technology, or broader interventions such as changes in public health policy. Long-term outcomes are typically evaluated through long-term observational studies that use accumulated information on patients. These studies can be difficult to interpret because of the impossibility of setting up ‘control’ and ‘treatment’ groups in advance. The idea of quasi-experimental design is to use the principles of designed experiments to select samples for comparison from the wider study population. An example used data from the Sentinel Stroke National Audit Programme (SSNAP) to assess the effect of thrombolysis, a treatment for stroke. The study [1] selected ‘control’ and ‘treatment’ groups based on common patient characteristics. This reduces possible interpretation bias due to (for example) differing likelihood of treatment according to patient characteristics. In this case, the treatment group showed a convincing increase in survival over the study timescale (about 10 years’ data).
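
The matching idea behind such comparisons can be sketched as follows (hypothetical data and variable names, not the SSNAP analysis): a logistic regression estimates each patient's propensity to receive the treatment, and each treated patient is then paired with the untreated patient whose estimated propensity is closest:

```python
# Hypothetical sketch of 1:1 nearest-neighbour propensity score matching;
# the data and column names are invented, not taken from SSNAP.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "age": rng.normal(70, 10, n),
    "severity": rng.normal(0, 1, n),
})
# Treatment assignment depends on patient characteristics (confounding)
p_treat = 1 / (1 + np.exp(0.05 * (df["age"] - 70) - 0.8 * df["severity"]))
df["treated"] = rng.uniform(size=n) < p_treat

# Step 1: estimate each patient's propensity score from the covariates
ps_model = LogisticRegression().fit(df[["age", "severity"]], df["treated"])
df["ps"] = ps_model.predict_proba(df[["age", "severity"]])[:, 1]

# Step 2: greedy 1:1 matching - for each treated patient, take the
# untreated patient with the closest propensity score (without replacement)
treated = df[df["treated"]]
controls = df[~df["treated"]].copy()
matched_idx = []
for _, row in treated.iterrows():
    if controls.empty:
        break
    idx = (controls["ps"] - row["ps"]).abs().idxmin()
    matched_idx.append(idx)
    controls = controls.drop(idx)

print(f"Matched pairs: {len(matched_idx)}")
print(f"Mean age - treated: {treated['age'].mean():.1f}, "
      f"matched controls: {df.loc[matched_idx, 'age'].mean():.1f}")
```

Comparing outcomes within the matched sample then approximates the randomised comparison that could not be set up prospectively.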

A lively discussion followed the presentations, covering questions such as the problem of experimental follow-up and verification of simulated results from ‘digital twin’ studies, applications of DoE to financial processes, methods of propensity score matching in quasi-experimental design studies, and the practical problems of persuading industry of the benefits of sound design in process improvement.

A video of the meeting is now available to watch on our YouTube channel.
 
1. Muruet, W., Rudd, A., Wolfe, C. D. A. & Douiri, A. (2018). Long-Term Survival After Intravenous Thrombolysis for Ischemic Stroke: A Propensity Score-Matched Cohort With up to 10-Year Follow-Up. Stroke, 49(3), 607-613.
 