I have just returned from the ACHSM/ACHS Joint Congress 2017 in Sydney, where the theme of the conference was “Winds of Change”. A common sub-theme in many of the sessions I attended was the need to change the current safety and quality trajectory of healthcare organisations. Indeed, in the panel I participated in following my own presentation, the first question was about why, after two decades of effort, we had succeeded minimally – if at all – in reducing the rate of preventable patient harm in healthcare organisations.
My own view on this is that current approaches to quality assurance and quality improvement are not doing the job they were supposed to and some new approaches are needed to complement the existing regimes. I am particularly familiar with this in the health sector, but I suspect the same issues are at play in other sectors that rely on the combination of accreditation and performance indicators.
I doubt I am alone in reaching the conclusion that the current regimes aren’t working. And I may not be the only one frustrated that the proffered solutions always seem to involve redoubling our efforts with current approaches, namely, strengthening accreditation and reporting more frequently on more reliable indicators. In other words: more of the same, only better.
Such a response is – unfortunately for the future patients whose harm could have been prevented – doomed to fail. And it will fail because, regardless of how good the accreditation regimes are, accreditation is not a guarantee of what will happen at the coalface of service delivery. And also because reliance on indicators – even the best of indicators – is akin to measuring the number of horses that have bolted through the open gate.
Our faith in accreditation and performance indicators rests on two heroic assumptions. The first is that systems and processes, once accredited, will be implemented as intended. Speaker after speaker at the ACHSM/ACHS Congress presented evidence that this is not the case. Indeed, there are established terms for this: “unwanted variability”, “protocol variance” and “normalised deviance”, to name but a few.
The second assumption is that indicators are an accurate read-out on what actually happens at the service delivery coalface. One problem with this assumption is that much “protocol variance” slips under the radar, not immediately resulting in a measurable bad outcome. Indeed, this is how “protocol variance” becomes “normalised deviance”.
Another problem is that compliance regimes based on achieving particular indicator benchmarks can provide a powerful incentive for behaviours that achieve the required indicator result but which are clinically questionable. For example, one way to ensure the hospital’s Emergency Department achieves its benchmarks for patient waiting times is to send home patients who might otherwise wait a long time.
The bottom line is that a good indicator result doesn’t necessarily mean that everything is working the way it is supposed to.
Even if more reliable indicators could be found, using indicators to flag safety and quality issues is – as noted earlier – an inherently reactive approach. And while improving the response to poor indicator results is a laudable goal, surely the objective is to get better at preventing avoidable harm, not to get better at responding to bad outcomes. If so, then we need approaches to quality and safety that allow us to identify potential issues in routine practice before they manifest as harm to a patient, that is, a preventative approach.
A preventative approach to safety and quality wouldn’t replace the current accreditation and performance monitoring regimes; rather, it would complement them. A preventative approach would encourage staff to regularly compare their daily practice with the accredited policies and protocols of the organisation and help to ensure accredited protocols are implemented as intended. It would proactively address issues before they become bad outcome data in an indicator result and would give stakeholders more confidence that indicator data accurately reflects actual practice.
Importantly, we don’t need to look too far to work out what is needed for a preventative approach to safety and quality in healthcare. Thought leaders in the sector have been writing about these issues for several years.
In 2013, a report from KPMG on clinical governance identified four essential building blocks for high reliability healthcare organisations, namely (1) a culture devoted to quality; (2) responsibility and accountability of staff; (3) optimising and standardising processes; and (4) measurement of performance. In the same year, an article in the Milbank Quarterly by Chassin and Loeb identified three common features of high reliability organisations, namely (1) leadership commitment to achieving zero patient harm; (2) a fully functional culture of safety across the organisation; and (3) widespread deployment of effective process improvement tools and methods.
Four years on and most healthcare organisations have yet to bite the bullet. In the meantime, at MEERQAT, we have been busy developing effective process improvement tools that will help health services to optimise and standardise their processes, while at the same time developing their frontline staff’s responsibility and accountability for quality.
We’re ready whenever health services are.