If you thought monthly data was a disaster for improvement, what about 2-yearly data?
Much data is collected monthly (or even quarterly), simply because that is the way it has always been done. But is this of any value when we are focusing on improvement?
Mandated 2-yearly Patient Safety Culture Surveys
Safety is critical, and staff attitudes to safety, the safety culture, play a significant role in the outcomes of care. Various organisations have created patient safety culture surveys, and some accreditation bodies require them. Usually, the requirement is to conduct such a survey at least every two years.
This is confounded further when the survey is not well suited to the healthcare environment: countries differ, and hospitals are different environments from primary care, which in turn differs from pre-hospital emergency services.
Let's look at the value of 2-yearly surveys to us improvers.
Statistically speaking
Considering many of us live in the domain of quality and improvement, why do we subject ourselves to surveys only every one or two years? We know that, in QI mode, a data point every year or two is insufficient to tell us anything (unless the difference between two points is so extreme that it satisfies a special-cause rule, such as an astronomical point).
Or do we wait enough years to accumulate sufficient baseline data plus the eight or more points that could denote a shift on a Statistical Process Control (SPC) chart?
Let’s deconstruct the argument
Take a run chart. If we wanted to use safety culture survey data to demonstrate a shift (six or more consecutive points on one side of the median), including a baseline, we would need more than ten years of annual surveys, or twenty years of two-yearly surveys, to show anything (unless we resort to a p value and a before-and-after comparison, but we all know that kind of analysis is flawed for our purposes).
Most of us have evolved our chart type to Statistical Process Control. Here, a shift needs eight data points, plus a few extra to confirm it is sustained. To demonstrate the impact of any change on two-yearly surveys: at least 16 years after the change is introduced!
All assuming we have sufficient baseline data.
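The run-chart shift rule above is easy to express in code. Here is a minimal sketch (the function name, the 10-point baseline, and the example scores are my own illustrative assumptions; the six-point rule and the convention that points on the median neither extend nor break a run follow common run-chart guidance):

```python
import statistics

def run_chart_shift(values, baseline_n=10, run_length=6):
    """Flag a shift on a run chart: six or more consecutive points
    on the same side of the baseline median. Points sitting exactly
    on the median are skipped (they neither extend nor break a run)."""
    median = statistics.median(values[:baseline_n])
    run, side = 0, 0
    for v in values[baseline_n:]:
        if v == median:               # on the median: ignore this point
            continue
        s = 1 if v > median else -1   # which side of the median?
        run = run + 1 if s == side else 1
        side = s
        if run >= run_length:
            return True
    return False

# 10 baseline survey scores, then a genuine improvement of ~10 points
scores = [62, 64, 61, 63, 65, 62, 60, 64, 63, 62,
          72, 74, 71, 73, 75, 72]
print(run_chart_shift(scores))  # True: six consecutive points above the median
```

Note what the example implies: even in this best case, detecting the shift took six post-change surveys, which is six years of annual surveys or twelve years of two-yearly ones.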
Crunching some numbers based on assumptions
Let's look at an annual survey.
Traditional thinking requires 20-25 data points to generate an SPC chart - enough of a baseline to calculate the control limits. That's TWENTY to TWENTY-FIVE YEARS!
If we made a change to the system, the shift becomes visible eight data points later, with several more needed to confirm it is sustained. Another EIGHT to TWELVE YEARS!
We wait a long time before the data tells us whether we have an improvement, and before we can plan our next change idea.
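The arithmetic above can be sketched in a few lines. This is a back-of-envelope calculation only: the function name is mine, the 20-point baseline and 8-point shift rule follow the common SPC guidance cited above, and any confirming points would add further years on top of the minimums printed here.

```python
def years_to_signal(interval_years, baseline_points=20, shift_points=8):
    """Minimum calendar years to (a) build an SPC baseline and
    (b) see an 8-point shift after a change, given one survey
    every `interval_years` years."""
    return baseline_points * interval_years, shift_points * interval_years

for interval in (1, 2):
    baseline, shift = years_to_signal(interval)
    print(f"{interval}-yearly survey: {baseline} years of baseline, "
          f"{shift} more years to see a shift")
# 1-yearly survey: 20 years of baseline, 8 more years to see a shift
# 2-yearly survey: 40 years of baseline, 16 more years to see a shift
```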
Survey fatigue drives our data collection
So why do we continue to make flawed interpretations as quality and safety professionals? Survey fatigue: the received wisdom is that it is best to avoid distributing the survey too frequently.
Yet how often is too frequent? I'm not sure anybody has tested this.
In the land of Continual Improvement (pedal the 'PDSA Cycle')
Perhaps there are alternative approaches. How about a sampling strategy that lets us survey more frequently? If we ensure each sample is as similar to the others as possible, we could conduct the survey more often.
The limitation is that we test changes in small cohorts rather than the entire population. Not ideal, but at least we can collect more frequent survey results through spaced, representative samples and test change ideas more often using the Plan-Do-Study-Act (PDSA) Cycle.
This could be a DoE (not Duke of Edinburgh, but Design of Experiment or Planned Experimentation). I've been craving an idea to run a DoE in my healthcare environment, and this might be it.