The Netherlands

International Journal of Health Care Quality Assurance

ISSN: 0952-6862

Article publication date: 1 February 2006


Citation

(2006), "The Netherlands", International Journal of Health Care Quality Assurance, Vol. 19 No. 2. https://doi.org/10.1108/ijhcqa.2006.06219bab.003

Publisher: Emerald Group Publishing Limited

Copyright © 2006, Emerald Group Publishing Limited


Quality assessment: process or outcome? The use of performance indicators for quality assessment in Dutch health care

The Dutch public health care system is being transformed in various ways. With an increasing focus on efficiency and consumer-driven care, health institutions in The Netherlands are forced to critically evaluate their actions and processes. With recent political developments creating a more liberal health care system, the role of the patient is steadily changing to that of a demanding consumer, who takes ever more control of his or her own choice of care, at the best price available.

In that sense, health care services are forced to negotiate in a complex matrix of insurer, consumer and government, where consistently good quality of care plays a decisive role. But what exactly is quality of care? Who determines its value? Is it possible to create measurable attributes for quality of care?

The delivery of accountable quality of care requires an operational and functional quality system. Such a system plays a key role in evaluating and improving (Deming’s plan-do-check-act) both process and outcome quality.

To gain an accurate insight into the different aspects of this system, performance indicators can be a useful tool.

The evaluation and improvement of quality of care with the use of performance indicators is a recent development, despite its common use in several business areas. It is a powerful tool in structural improvement, and it gives the consumer a transparent overview of institutional performance, which is something frequently demanded in health care of late. But what is the definition of a performance indicator? What are its strengths and weaknesses?

Justifying the use of performance indicators

A definition of an indicator in health care is given by Colsen and Casparie (1995), who say: “An indicator is a measurable aspect of care that gives an insight into the quality of care”.

A similar definition can be found in the writing of Lawrence (1997).

In the above description, quality of care is seen as a variable that can be measured with proper indicators. On a different level, a dimension of care could also function as such a variable. It is important to understand that the reality of health care services can hardly be fully captured by metric indicators; they can, however, give a structured and specific view of a variety of quality aspects.

Clear and effective indicators provide direct insight into a segment of quality, whereas a quality system alone gives an organisation only an indirect overview. A change in an indicator’s value therefore functions directly as a motive to adjust and improve. The use of key performance indicators is a logical step in the strategic alignment pyramid developed by Bauer (2004).

After defining a proper vision, strategy and objectives, the next step is to make those objectives operational in the form of performance indicators. Subsequently, those indicators can be used to define and implement action initiatives. According to Bauer’s (2004) theory, this is the only way to achieve proper alignment between initial objectives and actions. Health care institutions often skip the second-to-last step of formulating and using metric performance indicators, which increases the risk of a mismatch between actions and objectives. According to common performance management theory, indicators form an essential part of the functioning of a performance management system (into which a quality system such as the EFQM/INK model is integrated). The theory discusses four critical performance areas:

  1. Performance measurement. The determination and measuring of the right performance indicators.

  2. The establishment of performance standards. This can be done internally or externally (government laws form a basis).

  3. Reporting of progress. A structural insight into indicator values is essential in designing proper feedback.

  4. The process of quality improvement. Formulating and implementing actions meant to adjust indicator values.

As can be seen, there is a strong link with Deming’s plan-do-check-act theory of continuous improvement. The most important justification is given by Donabedian’s quality assessment theory.
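The four performance areas above form one continuous measure-compare-report-improve loop. As an illustration only (the indicator names, standards and values below are hypothetical, not drawn from any real institution), a minimal sketch of such a loop might look like this:

```python
# Minimal sketch of the four performance areas as a plan-do-check-act loop.
# All indicator names, standards and measured values are hypothetical.

# 1. Performance measurement: the indicator values an institution tracks.
measurements = {"waiting_time_days": 35, "client_satisfaction": 7.9}

# 2. Performance standards: targets set internally or derived from external norms.
standards = {"waiting_time_days": 28, "client_satisfaction": 7.5}

def report_progress(measurements, standards):
    """3. Reporting of progress: structural insight into indicator values."""
    report = {}
    for name, value in measurements.items():
        target = standards[name]
        # Lower is better for waiting time; higher is better for satisfaction.
        lower_is_better = name == "waiting_time_days"
        met = value <= target if lower_is_better else value >= target
        report[name] = {"value": value, "target": target, "met": met}
    return report

# 4. Quality improvement: indicators that miss their standard trigger action.
report = report_progress(measurements, standards)
actions = [name for name, entry in report.items() if not entry["met"]]
print(actions)  # the waiting-time indicator misses its standard
```

The closing of the loop — adjusted actions feeding back into new measurements — is exactly Deming’s check-act step referred to above.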

Donabedian’s triad model

We can only get the most complete, credible and useful information by studying structure, process and outcome in conjunction (Donabedian, 1980).

As mentioned before, indicators must generate a solid and integrated overview of a variety of quality dimensions and aspects. A method to classify indicators properly to fulfil this demand is given by Avedis Donabedian. His quality assessment theory contains a relatively simple model, particularly applicable in health care: the structure/process/outcome (SPO), or Donabedian’s triad, model.

In his theory, he describes three quality elements: structure, process and outcome (the effect of the delivered service). The first two elements contain indirect measures that influence the third, direct element, outcome. All elements are linked with each other; insight into just one of the three is therefore insufficient to measure and evaluate integral quality.

Outcome indicators seem to give the best view of quality performance but, as research shows, process indicators are much more sensitive and unequivocal in the measurement of changes in quality values.

Donabedian’s partitioning makes it possible to determine causal relationships between the several indicators and to report malfunctioning at an early stage. For example, incorrectly informing patients about their treatment can result in poor client-satisfaction outcomes; those outcomes alone are hardly sufficient to determine the real cause of the problem. To better understand this assessment theory, it is important to define the three elements further.

Indicators in the area of structure

This area contains all tools and resources available to the players (personnel and management) in a health care institution. These indicators also reflect the organisational environment in which the core processes take place (e.g. personnel qualifications, staffing and facilities).

Indicators in the area of process

Process indicators contain all activities that take place between institutional players and patients (consumers). Within this area (just like the outcome area) there is a distinction between technical and interpersonal processes. The first identifies the clinical improvement of individual health, without increasing risk. The second identifies the social and psychological interaction between health care players and their consumers.

Indicators in the area of outcome

This area contains the effects (outcomes) of the preceding processes on the health and well-being of both employees and consumers (the delivered service). To quote Donabedian:

Outcome means a change in a client’s current and future health status that can be attributed to antecedent healthcare.

As in the preceding area, a distinction can be made between technical and interpersonal effects. In practice it is very hard to determine the right outcome indicators to reflect quality of care, mostly because it is difficult to gain a clear and accurate view of consumer demands and of the effects of delivered care. In mental health care this is a major concern, because the effects of care largely come down to an individual sense of well-being.
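Donabedian’s triad can also serve as a simple classification scheme when assembling an indicator set. A sketch, in which the listed indicators are illustrative examples only and not a validated set:

```python
# Sketch of classifying indicators along Donabedian's structure/process/outcome
# triad. The indicators listed are illustrative examples, not a validated set.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Indicator:
    name: str
    area: str                   # "structure", "process" or "outcome"
    kind: Optional[str] = None  # "technical" or "interpersonal" (process/outcome)

indicators = [
    Indicator("personnel qualifications", "structure"),
    Indicator("waiting time", "process", "technical"),
    Indicator("patient informed about treatment", "process", "interpersonal"),
    Indicator("client satisfaction", "outcome", "interpersonal"),
]

def by_area(indicators, area):
    """Group indicators per quality element, so all three can be studied together."""
    return [i.name for i in indicators if i.area == area]

print(by_area(indicators, "process"))
```

Grouping the set per element makes explicit whether each of the three areas is actually covered, in line with Donabedian’s point that the elements must be studied in conjunction.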

A similar way of partitioning is used by several research institutes, such as the Dutch Verwey-Jonker Institute, which makes a distinction based on the core health processes, entry/cure/exit, and separate outcomes (aftercare). This model bears strong similarities to the business process model of input/throughput/output. In such process models there is usually a clear distinction between output and outcome: the first is the physical delivery of a product or service, the second the effects of that delivery.

The described structure/process/outcome model aligns well with the INK model (based on the European Foundation for Quality Management model), widely used by Dutch health care institutions to support existing quality systems in practice.

On the strategic management level of any health institution, insight into outcome indicators is preferred, mostly because those indicators give an integrated view of the preceding process elements and thus provide incentives to adjust objectives, policies and action initiatives. On the tactical and operational management levels, the first two indicator areas are the most interesting for effectively achieving the desired results formulated on the strategic level.

Insight into structure and process indicators on a strategic level is nevertheless crucial to a more integral and effective judgment of malfunctions, mainly because the causes of disappointing outcomes can be monitored much better. This stimulates a faster feedback loop and ensures a better fit between process objectives (formulated by top management) and actions. This top management responsibility is often stated in recent accreditation standards for Dutch health institutions. As Donabedian concludes, proper quality assessment is impossible by looking at separate outcome indicators alone.

Performance indicator determination

The determination of the right indicators is a delicate and complex business. It is hard to quantify quality and performance aspects, and it is difficult to identify the right elements of the apparent quality dimensions. The dimension “effectiveness of care” is an example of this complexity: although almost all players in the health care arena are highly interested in indicator values for this dimension, it is one of the vaguest areas to get a grip on. Clinical outcomes can be determined on a consensus and evidence basis, but psychological and social well-being cannot easily be measured. In mental health care this is an even bigger problem, because of the characteristics of the related illnesses. Fortunately, the determination of indicators that cover the process element is much simpler. For example, there is consensus about the influence of the indicator “waiting time” on the variable quality of care. Such indicators are not just suitable for creating (national) standards (such as the Dutch Treeknormen); they are also readily measurable.

A main set of indicators for hospital care, based on consensus at a national level, was recently developed for use.

The use of performance indicators

Since 2002, there has been an increased interest in performance indicators in Dutch health care. In mental health care, for example, a study (Werkplaats Benchmarking Grote Steden, 2002) compared several mental health care institutions on a variety of predetermined performance indicators. Such data allow institutions to mirror themselves against the best practices in the field, something the Department of Health of The Netherlands frequently stresses. Dutch hospitals are already using several clinical indicators at the specific level of illnesses; those indicators will soon be used for comparison by the hospital consumer.

As previously stated, one of the biggest issues with the measurement and use of indicators is that they are hard to quantify. To get the most out of them, it is important to use quantified indicators as much as possible: this makes it possible to spot changes (trend spotting) and to use the values in comparison or mirror studies. The ability to spot change is called “indicator responsiveness”. At the same time, it is necessary to understand that qualitative indicators are just as valuable for an integrated evaluation; it is all about a proper and balanced mix of the two.

Performance indicators can be used for the following purposes:

  • For benchmarking – by consistently measuring and comparing indicator values, an institution is able to check whether it aligns with preset objectives and standards. The results of such studies can be used as an incentive to improve or redesign specific elements or processes.

  • For steering (shift from intuitive to systematic) – indicators form an essential element of a modern performance management system. One can steer (loop back) on four different domains: professional, organisational, satisfaction and financial results.

  • As a means for external accountability – the application of indicators makes it possible to construct a solid base to prove standards, objectives and targets are met. This is often called evidence-based management or management by fact.

In short, by ratifying management actions with indicator values, institutions can clearly show their level of performance and quality in specific areas.
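The benchmarking use described above reduces to a straightforward comparison once indicator values are quantified. A sketch, in which the institution names and waiting-time values are invented for illustration:

```python
# Sketch of a mirror/benchmark comparison between institutions.
# Institution names and indicator values are invented for illustration.

# Waiting time in days per institution (lower is better).
waiting_times = {"institution_a": 40, "institution_b": 25, "institution_c": 31}

# Best practice in the field is the lowest observed waiting time.
best_practice = min(waiting_times.values())

# The gap to best practice is the incentive to improve or redesign a process.
gaps = {name: value - best_practice for name, value in waiting_times.items()}
print(gaps)
```

The same comparison against an external standard rather than the field’s best value gives the external-accountability use: the gap then shows whether a national norm such as a Treeknorm is met.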

Clearly, the use of indicators is a positive development for both the health care institution and the consumer. Although there is a long way to go to the actual and complete use of indicators, there is a solid base of awareness of their relevance. The benefits of their use and the resulting insights will eventually improve the quality of care. By an effective and conscious use of indicators as part of the quality systems already available, an institution will be ready for the coming free-market developments of the Dutch public health care system.

For more information: http://timpostema.nl

References

Bauer, K. (2004), “The power of metrics: KPIs – the metrics that drive performance management”, DM Review, January

Colsen, P.J.A. and Casparie, A.F. (1995), “Indicatorregistratie: een model ten behoeve van integrale kwaliteitszorg in een ziekenhuis”, Medisch Contact

Lawrence, M. (1997), “Indicators of quality healthcare”, European Journal of General Practice, Vol. 3, pp. 103–8

Werkplaats Benchmarking Grote Steden (2002), “Benchmarkmodel GGZ”
