In recent months, we have seen numerous examples of human service systems that have failed – sometimes catastrophically and with deadly consequences.
With all the emphasis on quality in recent decades, why do things go badly wrong as often as they do, particularly in systems dependent on human activities? Weren’t accreditation regimes and performance indicators supposed to ensure that systems work as intended, delivering quality products and services that meet defined standards?
Perhaps it is time to re-appraise the limitations of the quality processes that were supposed to guard against such failures, and identify supplementary approaches that can fill the gaps that are now becoming apparent.
In our experience, significant system failures are very often the end result of many small “mutations” in the way processes and protocols are implemented. It’s not that individuals deliberately set out to implement a process incorrectly, but rather that each person introduces small variations in their practice to suit their own requirements or work preferences. Some changes improve the process, but many do not, and almost none of the variations are documented.
On their own, these deviations in practice usually go undetected since they don’t appear to cause a problem. Importantly, because they are tolerated, the deviations become the new “norms” of practice. Indeed, staff who are responsible for training new team members are very likely to teach their version of the protocol to others, who in turn add further adaptations over time. Eventually, the cumulative adaptations push the system beyond its tolerance threshold.
Unfortunately, indicators tend to focus on outputs and outcomes, or on the quantifiable aspects of processes. As a result, organisations that rely on indicator measurements to tell them whether all is well will usually not detect these deviations in practice until it is too late.
It’s not that indicator measurement has no value in quality assurance. It’s just that indicators are the wrong instrument to monitor the small shifts in practice that are the root cause of many system failures.
What is needed is a systematic approach to evaluating business processes that deliberately explores how staff do their daily work and why they work the way they do. Rather than hope that protocol adaptation doesn’t occur, organisations could use systematic process evaluation to periodically re-acquaint staff with the protocols they are expected to follow, and to help staff identify and address the issues that prevent them from doing so.
These ideas have informed our thinking in developing the MEERQAT concept. Our tools provide a framework for structured and consistent team-based conversations about business processes. The interactive maps provide information to the team about how a process or program is intended to be implemented, and allow the team to capture empirical evidence about how the process or program is actually being implemented, including the what and why of any variations that have been introduced by staff.
Teams that use our tools report that staff engage well with a graphical approach to process evaluation and develop a sense of ownership both of issue diagnosis and the resulting quality improvement action plan. A major benefit is the organisational learning – both horizontal and vertical – as staff discuss every aspect of process implementation.
Of course, implementing such an approach to quality improvement can only occur if senior managers are prepared to encourage and enable their staff to undertake these discussions. And if one or two hours of structured conversations each week could prevent a catastrophic system failure, it would certainly be worth it.