Working smarter or harder?

International Journal of Health Care Quality Assurance

ISSN: 0952-6862

Article publication date: 1 November 2002


Citation

Jackson, S. (2002), "Working smarter or harder?", International Journal of Health Care Quality Assurance, Vol. 15 No. 6. https://doi.org/10.1108/ijhcqa.2002.06215faa.001

Publisher

Emerald Group Publishing Limited

Copyright © 2002, MCB UP Limited


Working smarter or harder?

A few months ago, I was facilitating a workshop where the delegates were multidisciplinary professionals working at a strategic level within healthcare. The aim of the workshop was to equip the delegates with the skills to apply the European Foundation for Quality Management (EFQM) excellence model to their area of work. During the two-day course, delegates were given the opportunity to apply the EFQM RADAR scoring matrix to a healthcare case study, which was developed from a real-life submission document for a regional quality award. For those of you who may be unfamiliar with this matrix, it has two main elements, one for scoring the enablers of the EFQM excellence model and one for scoring the results. With regard to the enablers, there are three elements to score: "approach", "deployment", and "assessment and review".

An approach is scored against two aspects: one, whether there is a clear rationale for the approach, i.e. it is based on research findings or on an assumption that it will address a particular problem that has been identified; and two, whether it is integrated into the organisation's wider policy and strategy. Deployment is scored against the aspects "implemented" and "systematic", which basically look at whether the approach has been adopted by all those who need to adopt it, and whether there is a clear system for rolling out the approach that has addressed the necessary logistical issues. Assessment and review is scored against three aspects: "measurement", "learning", and "improvement". More specifically, there needs to be evidence of the measures that have been taken to determine whether the approach was effective or not (the learning) and, if not, whether some improvement action took place as a result. Scores for approach, deployment, and assessment and review range from 0 per cent to 100 per cent, with lower scores assigned where there is "no or anecdotal evidence" and higher scores where there is "clear" or "comprehensive" evidence.

Once the team attending the aforementioned workshop had scored some of the enabler sub-criteria in the case study, they then applied the same principles to their own area of work. The findings were remarkable, mainly because they gave a very clear message about how we often work in healthcare. Let me explain.

The chosen enabler related to the area of "public/patient partnerships" and the scores agreed by the team were as follows:

  • approach – 50 per cent;

  • deployment – 70 per cent;

  • assessment and review – 30 per cent.

The message may not be obvious at first but, having worked in healthcare since 1978, I realised that this is a very common way of operating. For instance, we often plan an approach to tackle an issue or problem and then roll it out with quite a bit of gusto. However, we do not always reach all the parts of the organisation that we need to, and often significantly reduce our efforts at around 70 per cent deployment or less. Then, when it comes to assessment and review, we are less enthusiastic because it is viewed as an add-on that requires more "unnecessary" work in an already overstretched environment. The consequence is that we begin to make decisions on subjective evidence or one-off symptoms in the organisation, i.e. someone complains that the initial approach, which may be holding focus groups for public/patient partnerships, is not being effective. Consequently, we plan another approach to tackle the same issue, i.e. setting up a Website, and make concerted preliminary efforts to roll that out, again with much less effort to measure its impact. Before we know it, we have three or four approaches partially in place, which in the case of public/patient partnerships may include a suggestion scheme and/or inviting the public and patients to join planning meetings, neither of which is assessed and reviewed. More significantly, possessiveness develops over all the approaches in place, despite there being no knowledge of which, if any, are effective.

As a consequence, healthcare personnel have a higher workload than they perhaps need, because they are now applying four approaches when the alternative would have been to undertake an assessment and review of the first approach. Practising in a rigorous way, i.e. following the RADAR logic of the EFQM excellence model, prevents healthcare personnel from being influenced by "one-off" symptoms that inaccurately suggest an approach is failing. Had measures been taken to determine the effectiveness of the original approach and the level of its deployment, the team would have been equipped to judge objectively the usefulness of the focus groups in their current format. Armed with this information, the team could then have decided whether the appropriate improvement action was to alter the format of the focus groups, apply more effort in the area of deployment, or cease focus groups altogether in favour of a completely different approach.

As you can see, building in assessment and review as a natural way of working would stop us from taking on new, possibly ineffective, tasks in addition to existing ineffective ones, thereby helping us work smarter instead of harder, a situation that appears much more favourable than the one we have currently.

If anyone can provide a better example than the above, or has an alternative view on this way of thinking and working, then please feel free to submit your views to me in an editorial format so that they can be considered for publication and wider sharing. In the meantime, enjoy this issue.

Sue Jackson
