Empirically strong quality improvement programmes

International Journal of Health Care Quality Assurance

ISSN: 0952-6862

Article publication date: 1 October 2006

Citation

Hurst, K. (2006), "Empirically strong quality improvement programmes", International Journal of Health Care Quality Assurance, Vol. 19 No. 6. https://doi.org/10.1108/ijhcqa.2006.06219faa.001

Publisher: Emerald Group Publishing Limited

Copyright © 2006, Emerald Group Publishing Limited


Again, we’re fortunate to publish articles based on, and connected by, systematically collected, robust evidence. Cesarotti and Di Silvio, for example, tackle an unusual and challenging topic – facilities management (FM) – notably the quality of hospital cleaning services. Although they write from an Italian perspective, their method, findings and recommendations are easily rolled out to other countries. The authors begin by conducting a systematic literature review, which alone generates valuable insights into FM, although space limitations prevent them reporting it in any detail. Its outcomes are triangulated with quantitative and qualitative data from fieldwork (involving most stakeholders) conducted by the authors. Consequently, we’re given a treatise on standard setting, testing and, to complete the audit cycle, service improvement. At our request, the authors compared and contrasted their “Italian standard” with ISO 9000:2000 – a clear bonus for our readers.

Siddharthan and his colleagues, using a large survey of hospital workers, examine work-related injury – notably staff under-reporting of problems associated with lifting and handling patients. As they underline, the problem’s seriousness shouldn’t be underestimated. A recent UK-wide survey (Hurst, 2005), for example, showed that an extra 0.1 per cent sickness absence equates to the loss of one full-time equivalent (FTE) person from the workforce, and a sizeable component of this sickness was injury-based time out. We’ve covered efficiency- and effectiveness-related worker safety in previous issues, yet the authors show how thinly researched the field remains. Their results will not only astonish and worry our service-manager readers; they also cry out for action from policy makers. The authors honestly report their limitations and consequently appear to undersell themselves and their research. However, the survey clearly represents the large US Veterans Administration health service workforce and, like Cesarotti and Di Silvio, the authors back up their robust quantitative data with qualitative evidence. The study’s methods alone are informative, and the qualitative fieldwork helps the authors unravel some of the reasons behind their complex findings.

Two sets of authors independently conducted robust literature reviews of topics not often encountered in the QA literature. Elkhuizen et al., for example, although based in The Netherlands, conducted a systematic, international literature review of healthcare-related business process re-engineering (BPR), notably BPR study methods. Even more interestingly and unusually, their findings are sobering for all publishing staff, from authors to reviewers. Despite the robust reviewing processes undertaken by healthcare journal staff, Elkhuizen et al. underline systematic flaws that slip through the reviewing net, such as authors failing to report findings against the explicit objectives stated earlier in their articles. The obvious question is: do the authors fall into the traps they spot in their systematic review? Reassuringly, they can’t be hoist by their own petard! Indeed, their article is a lesson for all involved in systematic reviewing. A second important question is: has weak reporting damaged the BPR protagonists’ case? My answer is anecdotal: as a researcher heavily involved in patient-focused care (PFC) research and development (R&D) ten years ago, I was surprised how quickly PFC and BPR slipped off the radar. Were weak study methods and reporting to blame? As the authors ask, have best-practice opportunities been missed as a consequence? Elkhuizen et al.’s conclusions and recommendations are clear, and it behoves all researchers to use them as a checklist for good research and reporting practice.

The second systematic review in Vol. 19 No. 6 is Shah and Robinson’s investigation into user involvement in medical technology development, design, testing, use and maintenance. Their review reminds us not only about the extensive array of medical equipment used by patients (sometimes without healthcare supervision) but also about the other users and stakeholders, and the methods employed to involve them at each stage. One issue that leaps out of the article for non-specialist readers is how applicable the theoretical and practical issues are to other healthcare R&D.

Finally, Øvretveit and Al Serouri’s case-controlled quality management system (QMS) evaluation generates valuable insights into quality improvement programmes in low-income countries. Western researchers may take some research structures and processes for granted, and may not realise the challenges faced by researchers in some countries, which Øvretveit and Al Serouri clearly describe. At worst, Western health services research may suffer, for example, poor response rates for a variety of reasons. Øvretveit and Al Serouri, by contrast, describe a QMS programme evaluation in hospitals housing poorly motivated staff, which can prevent QMS R&D projects from getting out of the starting blocks. The upside, on the other hand, is how they show that quality frameworks, notably Maxwell’s (1992) six quality dimensions, help to explain and organise QMS implementation and evaluation. They also show that considerable service development and organisational gains can be made from relatively little investment. Consequently, the article is as valuable for its methodological lessons as for its findings.

On an entirely different note, readers may wish to submit articles on “managed care” (for example, Kaiser Permanente) for a themed issue we plan to publish early next year. We are interested in any work that addresses quality and performance. Manuscripts should be submitted to any IJHCQA editor by the end of this year.

Keith Hurst, Health Sciences and Public Health Research Institute, Leeds University, Leeds, UK

References

Hurst, K. (2005), Primary Care Trust Workforce Planning and Development, Wiley, Chichester.

Maxwell, R. (1992), “Dimensions of quality revisited”, Quality in Health Care, Vol. 1, pp. 171-177.
