Cognitive systems

Journal of Documentation

ISSN: 0022-0418

Article publication date: 17 October 2008


Citation

Bade, D. (2008), "Cognitive systems", Journal of Documentation, Vol. 64 No. 6. https://doi.org/10.1108/jd.2008.27864fae.001

Publisher: Emerald Group Publishing Limited

Copyright © 2008, Emerald Group Publishing Limited



Article Type: Comparative review. From: Journal of Documentation, Volume 64, Issue 6

Joint Cognitive Systems: Foundations of Cognitive Systems Engineering

Erik Hollnagel and David D. Woods, CRC Press/Taylor & Francis, Boca Raton, FL, 2005, 240 pp., ISBN 9780849328718

Joint Cognitive Systems: Patterns in Cognitive Systems Engineering

David D. Woods and Erik Hollnagel, CRC Press/Taylor & Francis, Boca Raton, FL, 2006, 232 pp., ISBN 9780849339332

Keywords: Cognition, Library and information science, Ergonomics

The writings of Hollnagel and Woods do not appear to have had much impact within LIS. Citation is rare: Garfield’s HistCite lists 11 citations for Hollnagel (1980), and the references to other papers that I have seen are accompanied by little discussion (Nitecki’s (1993) two-paragraph treatment of Hollnagel (1978a, b)) or none at all (Fenly (1990) cites Hollnagel (1989); Kuhlthau (1991) cites Hollnagel and Woods (1983)). The only review I could locate was Norton’s (1996) review of a volume co-edited by Hollnagel. Thus, even though these two volumes are already a few years old, a review is in order.

I first discovered the work of Hollnagel and Woods while researching the literature on error a few years ago. After deciding to search the ergonomics literature for theoretical perspectives on error, I came across a paper presented by Hollnagel at the second NATO Conference on Human Error, August 1983 in Bellagio, Italy. The second paragraph began thus:

The bias that I find is the assumption that there exists something called “Human Error” about which meaningful questions can be posed – and answered! “Human Error” thereby gets the status of a concrete phenomenon as, for instance, decision making. It is, however, obvious that “Human Error” does not refer to something observable, in the same sense as decision making does (Hollnagel, 1983).

My first response to reading that paper was to scoff: No such thing as human error? Ha! Human error is the most frequent experience of my life! I understood why the editors of the report of that conference (Senders and Moray, 1991) chose not to discuss Hollnagel’s contribution and why it found no sympathetic audience there. Yet his remarks in that paper continued to gnaw at me and so I began to read his later works, especially Hollnagel’s (1998) monograph Cognitive Reliability and Error Analysis Method. I also read a monograph which bore the mark of Hollnagel’s influence: Behind Human Error (Woods et al., 1994), coauthored by Woods, Hollnagel’s collaborator since 1979. By the time I finished writing the book I had been working on, I was describing Hollnagel’s ideas as “a body of work which is perhaps the most interesting of all for the purposes of studying database errors” (Bade, 2004). The two volumes by Hollnagel and Woods reviewed here are some of the latest in a series of publications which from the first (Hollnagel, 1971) to the last (Hollnagel, 2008) are packed with illuminating discussions of theoretical and practical issues of great importance to LIS.

What interests Hollnagel is set forth historically in a remark from the forthcoming paper “The changing nature of risks”:

The crucial change that took place in the 19th century was that accidents became associated with the technological systems that people designed, built, and used as part of work, in the name of progress and civilisation. Suddenly, accidents happened not only because the people involved, today referred to as people at the sharp end, did something wrong or because of an act of nature, but also because a human-made system failed. Furthermore, the failures were no longer simple, such as a scaffolding falling down or a wheel axle breaking. The failures were complex, in the sense that they usually defied the immediate understanding of the people at the sharp end. In short, their knowledge and competence was about how to do their work, and not about how the technology worked or functioned. Before this change happened, people could take reasonable precautions against accidents at work because they understood the tools and artefacts they used sufficiently well. After this change had happened, that was no longer the case (Hollnagel, 2008).

What distinguished Hollnagel’s approach from that of other participants at the 1983 conference, as well as many researchers on error today, was his rejection of the “mental models” or information processing approach to cognitive psychology as a useful paradigm for research on error. His criticisms of the information processing approach in cognitive psychology – set out in a few brilliant papers published in the 1970s (Hollnagel, 1975, 1978a, b) – were rooted in a systems theoretical approach (Bertalanffy), cybernetics, functionalism and phenomenological psychology, and anticipated later theoretical discussions of distributed cognition and cognition-in-the-wild as well as criticisms of them.

Joint Cognitive Systems was written in two volumes to provide both a theoretical and a practical approach to the study of what the authors call Joint Cognitive Systems (JCS). The first is the theoretical volume, with Hollnagel as the principal author, while the second presents practical cases, with Woods as the principal author. My remarks will focus on the first (theoretical) volume, with only an occasional remark on the subsequent volume.

In the first volume the authors define a cognitive system as “a system that can modify its behaviour on the basis of experience so as to achieve specific anti-entropic ends” (Hollnagel and Woods, 2005, p. 22). A JCS is characterized by co-agency rather than interaction (p. 42). Thus, organizations, human-machine systems, machine-machine systems and the management of human-machine and machine-machine systems can all be analyzed as integrated wholes – JCSs – rather than as disjointed collections of separate components. The authors’ aim is to observe, model and understand “how joint cognitive systems use artefacts to cope with complexity” (p. 196).

Chapter one discusses the historical circumstances that gave rise to the authors’ development of a theory of Cognitive Systems Engineering (CSE). They mention three driving forces “that have shaped our thinking about humans and machines”:

  1. the growing complexity of socio-technical systems;

  2. problems and failures created by a clumsy use of the emerging technologies; and

  3. the limitations of linear models and the information processing paradigm (pp. 1-2).

The first development is accepted, the second is being addressed from many different angles with varying degrees of success (theory does matter), while the third has been near the center of Hollnagel’s theoretical interest for the past 30 years. Joint Cognitive Systems theory has been developed largely as a response to the limitations of the information processing paradigm and for this reason it has immense theoretical interest for information scientists as well as practical interest for librarians and others working with information technologies.

Some practical issues that appear right away in the first chapter have been immensely important for my own understanding of the use of information technologies in libraries. These are (from Hollnagel and Woods, 2005, pp. 6-7):

  • Systems and issues are coupled rather than independent. If we disregard these couplings in the design and analysis of these systems, we do it at our own peril.

  • The striving for higher efficiency inevitably brings the system closer to the limits for safe performance.

  • Increased dependence on proper system performance. If one system fails, it may have consequences that go far beyond the narrow work environment.

  • The belief that more data or information automatically leads to better decisions is probably one of the most unfortunate mistakes of the information society.

The first issue noted above is taken up again in the second volume in the discussion of Woods’ First Law of Cooperative Systems: “It’s not cooperation, if either you do it all or I do it all.” (Woods and Hollnagel, 2006, p. 117) and is a matter that I have dealt with extensively in a series of recent publications (Bade, 2004, 2007, 2008a, b). The second and third points are relevant to almost all policy decisions being made in libraries today, while the fourth is pertinent to discussions of full-text searching, metadata and making web search engines the primary access mechanism for library holdings. Theoretical aspects of all of these points are dealt with more fully later in the first volume.

Some of the other issues discussed include:

  • The internal representation, the system of categories, is characteristic for each system rather than generic and common to all systems. This means that two systems, or two persons, may have a different “model of the world”, i.e. different ideas of what is important as well as different knowledge and expectations. Specifically, as many have learned to their dismay, system designers and system users may have completely different ideas about how an artefact functions and how it shall be used (p. 15).

  • Cognitive systems appear to have a purpose, and pragmatically it makes sense to describe them in this way. In practice, the purpose of the JCS is often identical to the purpose of the human part of the system, although larger entities – such as organisations – may be seen as having purposes of their own (p. 22).

  • An important premise for CSE is that all work is cognitive (p. 24).

The second chapter, Evolution of Work, includes discussions of amplification and interpretation, and of the difference between an embodiment relation between human and artefact (tool, machine) and a hermeneutic relation (prosthesis), citing the work of Don Ihde and reminding me of Virilio. This is one of the most interesting and important analyses the authors make, and in fact the concluding sentence of the chapter states: “It is the purpose of CSE to provide a functional solution to amplification, so that the result represents an embodiment rather than a hermeneutic relation” (p. 47).

In an embodiment relation:

[…] the artefact serves as an extension of the body and as an amplifier of the operator’s capabilities […]. The requisite variety of the operator as a control system is therefore supplemented by the variety of the artefact, which means that the joint cognitive system is better able to control the process (p. 45).

Whereas in a hermeneutic relation:

[…] the artefact serves as an interpreter for the operator and effectively takes care of all communication between the operator and the process […] the operator has moved from an experience of the world through the artefact to an experience of the artefact embedding the world […]. The operator’s understanding may accordingly come to depend on the surface presentation of the process and the operator’s experience may be representation-dependent shallow knowledge […]. This suggests the possibility that the more “advanced” the HMI [human-machine interaction] is and the more sophisticated the computer’s functions are, the more likely we are to depend on it, hence to use the computer as an interpreter rather than as an amplifier (pp. 33-4).

Their discussion of the hermeneutic relation should provoke considerable reflection on such practices and projections as relevance ranking, ontologies and the Semantic Web.

The reason for making the distinction between embodiment and hermeneutic relations is that the authors understand that “a well-formulated view of what human-machine systems are” is a prerequisite for the use of technologies, as is “a principled position to what the role of technology should be vis-à-vis the operator” (p. 32). They note that “Since the new conditions for work were predicated on what technology could do rather than on what humans needed, the inevitable result was that the human became the bottleneck of the system” (p. 35) and “Unfortunately, the extensive use of computers created an equally large number of possibilities for making simple tasks unnecessarily complex and for inadvertently introducing task requirements, which were clearly beyond human capacity” (p. 37).

Subsequent chapters discuss the basics of CSE (Chapter 3), dealing with complexity (Chapter 4), the use of artefacts (Chapter 5), JCS (Chapter 6), modelling cognition and a contextual control model (Chapter 7), temporal aspects of control (Chapter 8), and applications of CSE (Chapter 9). Chapters 7 and 9 contain particularly important discussions of matters relevant to crucial aspects of LIS. For instance, Chapter 7 begins with the authors’ critique of classical information processing views of feedback and control:

Present day information processing theories have adopted this tradition in the sense that humans are modelled as responding to the information presented rather than as selecting it. CSE has adopted the more active view according to which humans actively search for and select information. In other words, perception is active and guided rather than passive and responsive (p. 137).

The discussion of user-friendly graphical interfaces in the section “The Forced Automaton Analogy” of Chapter 9 is also provocative:

It may seem a good idea to provide people with a limited number of choices, as menu items or icons, but it effectively forces them to behave like automata […]. By limiting the possibilities the designer really limits the responses, and the user-friendly interface becomes a strait-jacket. The use of the system is ostensibly made easier, but only for anticipated situations. It becomes difficult or impossible to explore how the system works, to develop new ways of working (unless they comply with options that the designer has considered), or in general to deviate from what the system allows. The user-friendly systems thereby actually serve the purposes of the designer rather than the purpose of the operator, and the design is made on the premises of the designer rather than on the premises of the operator (p. 189).

Again in Chapter 9 the discussion of adaptive interfaces is instructive:

Proponents of adaptive interfaces or adaptive interaction often point to the flexibility and efficiency of human-human communication. It is beyond dispute that humans are very capable of adapting the way they communicate to others, and that this – in most cases – is one of the reasons why communication is so effective. But the reason why this is possible is that humans – being models of each other – have the requisite variety built-in, so to speak. Communication is very much a case of being in or establishing control, as pointed out by Shannon and Weaver (1969, orig. 1949). And control, as we know by now, requires a good model of the system to be controlled. Since artefacts are not good models of users – and are unlikely to become that for the foreseeable future – it follows that adaptation is doomed to fail as a panacea. This does not preclude that it may work under very specific circumstances, for instance well-understood and narrowly defined work contexts. Attempts to use it outside of such circumstances are nevertheless unlikely to succeed and may, at best, unwittingly impose the forced automaton metaphor (p. 193).

There is much more in the first volume, and the discussions of practical cases in the second volume put flesh on the bones of theory provided in the first. There is enough in these two volumes alone to reconstruct the foundations of theory in LIS and reorient both research and practice in librarianship. It is my hope that Hollnagel and Woods can begin to do for LIS what they have already accomplished within the field of ergonomics.

1. For more complete lists of publications by Hollnagel, see http://erik.hollnagel.googlepages.com/ and www.ida.liu.se/~eriho/Publications_O.htm

2. Woods writes about his research and publications and archives the posts on his blog, available at: http://csel.eng.ohio-state.edu/blog/wordpress/?cat=11

David Bade, University of Chicago, Chicago, Illinois, USA

References

Bade, D. (2004), The Theory and Practice of Bibliographic Failure, or Misinformation in the Information Society, Chuluunbat, Ulaanbaatar

Bade, D. (2007), “Rapid cataloging: three models for addressing timeliness as an issue of quality in library catalogs”, Cataloging & Classification Quarterly, Vol. 45 No. 1, pp. 87–123

Bade, D. (2008a), Responsible Librarianship: Library Policies for Unreliable Systems, Library Juice, Duluth, MN

Bade, D. (2008b), “The social life of metadata: arguments from utility for shared database management (a response to Banush and LeBlanc)”, Journal of Library Metadata

Fenly, C. (1990), “Technical services processes as models for assessing expert system suitability and benefits”, in Lancaster, F.W. and Smith, L.C. (Eds), Artificial Intelligence and Expert Systems: Will They Change the Library? Papers Presented at the 1990 Clinic on Library Applications of Data Processing, Graduate School of Library and Information Science, Urbana, IL, pp. 50–66

Hollnagel, E. (1971), “Informationspsykologi”, Dansk Psykolognyt, Vol. 16, pp. 307–8

Hollnagel, E. (1975), “The relation between meaning, intention and action”, Informatics 5, The Analysis of Meaning, Aslib, London, pp. 135–47

Hollnagel, E. (1978a), “Det kognitive synspunkt: mod en ny rationalisme?”, Nordisk Psykologi, Vol. 30 No. 3, pp. 209–24

Hollnagel, E. (1978b), “The paradigm for understanding in hermeneutics and cognition”, Phenomenological Psychology, Vol. 9 No. 1, pp. 188–217

Hollnagel, E. (1980), “Is information science an anomalous state of knowledge?”, Journal of Information Science, Vol. 2 Nos 3-4, pp. 183–7

Hollnagel, E. (1983), “Human error”, position paper presented at the second NATO Conference on Human Error, Bellagio, Italy, August 1983, available at: www.ida.liu.se/~eriho/Publications_O.htm

Hollnagel, E. (1989), “The reliability of expert systems: an inquiry of the background”, in Hollnagel, E. (Ed.), The Reliability of Expert Systems, Ellis Horwood, Chichester, pp. 14–36

Hollnagel, E. (1998), Cognitive Reliability and Error Analysis Method – CREAM, Elsevier Science, Oxford

Hollnagel, E. (2008), The Changing Nature of Risks, available at: http://erik.hollnagel.googlepages.com/Changingnatureofrisks.pdf

Hollnagel, E. and Woods, D.D. (1983), “Cognitive systems engineering: new wine in new bottles”, International Journal of Man-Machine Studies, Vol. 18, pp. 583–600

Hollnagel, E. and Woods, D.D. (2005), Joint Cognitive Systems: Foundations of Cognitive Systems Engineering, CRC Press, Boca Raton, FL

Kuhlthau, C.C. (1991), “Inside the search process: information seeking from the user’s perspective”, Journal of the American Society for Information Science, Vol. 42 No. 5, pp. 361–71

Nitecki, J.Z. (1993), “Metalibrarianship: a model for intellectual foundations of Library Information Science”, ERIC document ED363 346, available at: www.twu.edu/library/nitecki/metalibrarianship/index.html

Norton, M.J. (1996), “Review of: Expertise and Technology: Cognition and Human-Computer Cooperation, edited by Hoc, J.M., Cacciabue, P.C. and Hollnagel, E.”, Journal of the American Society for Information Science, Vol. 47 No. 9, pp. 722–5

Senders, J.W. and Moray, N. (1991), Human Error: Cause, Prediction, and Reduction, Lawrence Erlbaum Associates, Hillsdale, NJ

Shannon, C.E. and Weaver, W. (1969), The Mathematical Theory of Communication, 4th ed., The University of Illinois Press, Urbana, IL

Woods, D.D. and Hollnagel, E. (2006), Joint Cognitive Systems: Patterns in Cognitive Systems Engineering, Taylor & Francis, Boca Raton, FL

Woods, D.D., Johannesen, L., Cook, R.I. and Sarter, N.B. (1994), Behind Human Error: Cognitive Systems, Computers and Hindsight, Crew Systems Ergonomic Information and Analysis Center, Wright-Patterson Air Force Base, Dayton, OH
