It is now almost 100 years since Walter Shewhart started work at the Western Electric Company’s Hawthorne Works. The quality control methods he developed there, based on measuring and understanding variation in industrial systems, were foundational and greatly influenced industry and management in the 20th century. Many people would recognize today the statistical process control charts he devised, and still use the simple heuristics he developed to interpret them. Shewhart’s work greatly influenced the ideas of W. Edwards Deming, whose “Theory of Profound Knowledge” has been adopted extensively in healthcare quality improvement, including what is perhaps the single most influential concept in the field – the Institute for Healthcare Improvement’s “Model for Improvement” and the idea of the PDSA cycle.
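For readers less familiar with them, the calculation behind Shewhart’s charts is strikingly simple. The sketch below is illustrative only: it estimates the control limits from the sample standard deviation, whereas individuals charts in practice usually estimate sigma from the moving range, and it applies only the most basic of Shewhart’s interpretation heuristics (a point beyond three sigma).

```python
# Illustrative sketch of a Shewhart-style control chart calculation.
# Assumes roughly independent measurements; uses the sample standard
# deviation for simplicity rather than the moving-range estimate
# typically used on real individuals charts.
from statistics import mean, stdev

def control_limits(samples):
    """Return (lower, centre, upper) three-sigma control limits."""
    centre = mean(samples)
    sigma = stdev(samples)
    return centre - 3 * sigma, centre, centre + 3 * sigma

def out_of_control(samples):
    """Indices of points beyond the control limits -- the simplest
    of Shewhart's rules for spotting special-cause variation."""
    lower, _, upper = control_limits(samples)
    return [i for i, x in enumerate(samples) if x < lower or x > upper]
```

A stable process (e.g. weekly infection counts hovering around 10) produces no signals, while a large spike falls outside the limits and is flagged for investigation.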
It is no exaggeration to say that the Model for Improvement has become dogma, included in almost every teaching course and programme of healthcare quality improvement – the national programmes to improve the NHS in England, Wales and Scotland, for example, all borrow heavily from this model. Using and understanding data is a central component of the Model for Improvement, but the role of data here is in measuring the effects of tests of change – for example, measuring the change in the number of hospital-acquired infections after implementing a new catheter checklist. In other words, the role of data in the Model for Improvement is limited largely to simple measurement.
With its focus on rapid-cycle tests of change, the model promotes the idea that measurement should be done little and often, using small samples to assess change in a small number of metrics – ideas that Walter Shewhart would also have espoused as he stalked the production line with his clipboard, carbon paper and pencil. There were perfectly good reasons for this approach in 1918, and many would argue that they still apply in 2017. Data can be burdensome to collect, and the delay between collecting data and having it available can be so long that it is of little use in measuring continuous improvement. Having more data does not necessarily lead to greater insight, and collecting it can distract from the actual work of improvement. However, the world has moved on in many ways since 1918, and it is important to ask whether we should reappraise the role of data in quality improvement in this age of Big Data.
There are several reasons why I think we should think again. As the marginal cost of collecting and storing electronic data falls towards zero, the idea that data for improvement should still be “little and often”, limited in scope and based only on small samples, looks less convincing than it did in the past. Rules that were appropriate in an age of pen-and-paper measurement look less relevant in the digital age. Indeed, there are reasons to think that holding on to the traditional approach is actively problematic. For example, the “little and often” approach ignores the problems that come from sampling: achieving unbiased samples of data in the real world is, in practice, quite hard to do. Even “little and often” measurement is more burdensome and slower than completely automated data collection and analysis – which might seem like a pipe dream in many healthcare organizations, but can still be an aspiration for organizations to work towards. For me, the biggest limitation of the traditional way of thinking about data in quality improvement is that it demotes data to the role of simple measurement: a dumb ruler against which to measure change. Using data to help people understand the systems they are trying to improve, to learn how and what to improve, and to plan their interventions and tests of change or simulate them in advance – all of this becomes possible when we have much richer, more detailed data and use it in more sophisticated ways. Not to do so is a missed opportunity. Shewhart’s ideas have taken us a long way, but it’s time to think again.