Quality improvement and evaluation: we need a more collaborative approach

Monday 28 November 2011

Improving care quality is a driving force for our healthcare system. Detailed evaluation of this activity is needed to fully understand what works, in what contexts, and why.

But a tension persists between the improvement and evaluation communities. The emergent nature of improvement initiatives, and the changing contexts in which they are delivered, make robust evaluation challenging. Improvement experts question what they consider to be overly rigid evaluation approaches designed to isolate the exact mechanisms of change, while evaluation experts question the absence of (or lack of clarity about) the theory underpinning some quality improvement initiatives.

So where does this leave us?

OPM recently took part in a roundtable event, hosted by the Health Foundation, to discuss current thinking about the challenges of evaluating complex interventions to improve healthcare. A report of the event has now been published. The overarching message is that the improvement and evaluation disciplines can be reconciled if experts on both sides are willing to collaborate more closely and inject a dose of pragmatism and flexibility into their approaches.

While randomised controlled trials and quasi-experimental designs offer the most robust approaches to evaluation, evaluators need to recognise that quality improvement programmes operate in the real world, in changing organisational contexts that cannot be tightly controlled.

On the other hand, improvement experts should recognise that evaluation, used well, can itself be an effective quality improvement tool. Generating data that give early indications of what is working (and what is not) can help avoid wasting valuable time and resources.

Aligning evaluation and improvement

So how can the evaluation and improvement disciplines be aligned in practice? Our discussions highlighted the need for improvement experts and evaluators to:

  • collaborate from the outset to develop a realistic theory of change, and ensure that programme and evaluation design are aligned
  • avoid a ‘conspiracy of enthusiasm’, i.e. expecting too much to be achieved. In reality, not all programmes will lead to transformation; some may make modest contributions, and others may lead to change that can only be measured over the longer term
  • align the evaluation focus and design with the nature and scale of the intervention, so that the evaluation generates information that is most useful to decision makers
  • be clear about the difference between stretch goals, designed to motivate clinicians, and evaluation goals, against which success will be judged
  • be flexible, plan for change, and review the evaluation model regularly.

Given the pressures facing our healthcare system, and the growing demands from clinicians, managers and commissioners for different kinds of data, the time has come for improvement experts and evaluators to overcome their differences and work together to ensure the sustainability and spread of interventions that genuinely improve healthcare for patients.