Books
The Law of Stretched Systems
Behind Human Error
Woods, David D.; Dekker, Sidney; Cook, Richard; Johannesen, Leila; Sarter, Nadine. Farnham, Surrey, England, and Burlington, Vermont, U.S.: Ashgate, second edition, 2010. 271 pp. Figures, tables, references, index.
Most scientific and academic studies begin with a definition of the subject or problem. It’s only logical for researchers to agree first on precisely what is being studied. But this book’s authors say that defining human error is not only pointless but impossible.
“The search for definitions and taxonomies of error is not the first step on the journey toward safety; it is not even a useful step, only a dead end,” according to the authors.
“Each organization or industry feels that their progress on safety depends on having a firm definition of human error,” they say. “Each group seems to believe that such a definition will enable creation of a scorecard that will allow them to gauge where organizations or industries stand in terms of being safe. But each organization’s search for the definition quickly becomes mired in complexity and terms of reference. Candidate definitions appear too specific for particular areas of operations, or too vague if they are broad enough to cover a wider range of activities.”
Medical researchers may be able to define the disease they want to cure, although even that is not always the case. Human error, however, belongs to a different class of phenomena, involving fantastically complex interactions of causal factors.
“The definitions [of human error] involve arbitrary and subjective methods of assigning events to categories,” the authors say. They describe three typical, and inconsistent, senses in which the term “error” is applied.
The first sense is described as “the cause of failure,” as in the phrase “the event was due to human error.” It implies that some type of behavior generates a failure, leading to “variations on the myth that safety is protecting the system and stakeholders from erratic, unreliable people.”
A second way of using the expression “error” is simply as a synonym for the failure itself. The authors say, “In this sense, the term ‘error’ simply asserts that the outcome was bad, producing negative consequences.”
Finally, “error” can be viewed as a process, or more often, as not following the right process. “However, the enduring difficulty is that there are different models of what the process is that should be followed: for example, what standard is applicable, how standards should be described and what it means when deviation from the standards does not result in bad outcomes,” they say.
While it might seem that context alone should clarify which sense of “error” is meant, the authors say that the three senses are often confused with one another, and that the same person can slip from one to another unconsciously.
In any case, they argue, none of the three is adequate.
Seeing error as cause tends to stop the analysis prematurely at an obvious and convenient point, instead of looking also at precursors and less evident factors. “Error-as-cause leaves us with human performance divided in two: acts that are errors and acts that are non-errors,” the authors say. “But this distinction evaporates in the face of any serious look at human performance … .
“Instead of finding error and non-error, when we look deeply into human systems at work, we find that the behaviors there closely match the incentives, opportunities and demands that are present in the workplace. Rather than being a distinct class of behavior, we find the natural laws that influence human systems are always at work, sometimes producing good outcomes and sometimes producing bad ones. Trying to separate error from non-error makes it harder to see these systemic factors.”
The trouble with defining error as consequence, the authors say, is that “this sort of definition is almost a tautology: It simply involves renaming preventable harm as error. But there are a host of assumptions packed into ‘preventable’ and these are almost never made explicit. We are not interested in harm itself but, rather, how harm comes to be. … Closer examination of ‘preventable’ events shows that their preventability is largely a matter of wishing that things were other than they were.”
Error as deviation from correct process “collides with the problem of multiple standards,” the authors say. “Choosing among the many candidates for a standard changes what is seen as an error in fundamental ways. Using finer- or coarser-grained standards can give you a very wide range of error rates. In other words, by varying the standard seen as relevant, one can estimate hugely divergent ‘error’ rates. Some of the ‘standards’ used in specific applications have been changed because too many errors were occurring or to prove that a new program was working.
“This slipperiness in what counts as a deviation can lead to a complete inversion of standardizing on good process: Rather than describing what it is that people need to do to accomplish work successfully, we find ourselves relying on bad outcomes to specify what it is that we want workers not to do. Although often couched in positive language, policies and procedures are often written and revised in just this way after accidents.”
Terminology aside, people do sometimes unintentionally act in ways that lead to bad consequences. If it is futile and even misleading to chase definitions of human error, what can we do?
Various answers to that question form the substance of Behind Human Error, which contains a rich discussion and recommendations. What follows are a few excerpts from “10 of the most important steps distilled from the research base about how complex systems fail and how people contribute to safety.”
Recognize that human error is an attribution. “It is not an objective fact that can be found by anybody with the right method or right way of looking at an incident,” the authors say. “It is … just one way of telling a story about a dreadful event (a first story). … The first story after celebrated accidents tells us nothing about the factors that influence human performance before the fact. Rather, the first story represents how we, with knowledge of the outcome and as stakeholders, react to failures.”
Pursue second stories. “Go beyond the first story to discover what lies behind the term ‘human error,’” the authors say. “When you pursue second stories, the system starts to look very different. You can begin to see how the system moves toward, but is usually blocked from, accidents. Through these deeper insights, learning occurs, and the process of improvement begins.”
Escape from hindsight bias. The authors say, “With knowledge of the outcome, we simplify the dilemmas, complexities and difficulties practitioners face and how they usually cope with these factors to produce success. The distorted view leads people to propose ‘solutions’ that actually can be counterproductive if they degrade the flow of information that supports learning about systemic vulnerabilities and if they create new complexities [that add difficulties to] practice. In contrast, research-based approaches try to use various techniques to escape from hindsight bias.”
Understand the work performed at the sharp end of the system. The “sharp end” — where actions are performed in real-world operations — is the confluence of the many stimuli, demands and pressures of the system. “Improving safety depends on investing in resources that support practitioners in meeting the demands and overcoming the inherent hazards in that setting,” the authors say. “Ironically, understanding the sources of failure begins with understanding how practitioners create safety and success first; how they coordinate activities in ways that help them cope with the different kinds of complexities they experience.”
They emphasize the importance of understanding practices from the point of view of those performing the actions, avoiding the so-called “psychologist’s fallacy,” which happens when “well-intentioned observers think that their distant view of the workplace captures the actual experience of those who perform technical work.”
Search for systemic vulnerabilities. “After elucidating complexities and coping strategies, one can examine how these adaptations are limited, brittle and vulnerable to breakdown under differing circumstances. Discovering these vulnerabilities and making them visible to the organization is crucial if we are to anticipate future failures and institute change to head them off.”
Examine how economic, organizational and technological change will produce new vulnerabilities and paths to failure. Some researchers have found what they call the Law of Stretched Systems: “Every system operates always at its capacity. As soon as there is some improvement, some new technology, we stretch it.”
In other words, technical improvement first goes into enhancing productivity, and only afterward — if at all — into safety. Change “pushes the system back to the edge of the performance envelope,” the authors say.
“Change under resource and performance pressures tends to increase coupling, that is, the interconnectedness between parts and activities. … Increasing the coupling between parts in a process changes how problems manifest, creating or increasing complexities such as more effects at a distance, more and faster cascades of effects, and tighter goal conflicts.” This leads to “new cognitive and collaborative demands which contribute to new forms of failure.”
The authors recommend “focusing your resources on anticipating how economic, organizational and technological change could create new vulnerabilities and paths to failure.”
Tame complexity with new forms of feedback. “A basic pattern in complex systems is a drift toward failure as planned defenses erode in the face of production pressures, and as a result of changes that are not well assessed for their impact on the cognitive work that goes on at the sharp end,” the authors say. “Continuous organizational feedback is needed to support adaptation and learning processes. To achieve this, you should help your organization develop and support mechanisms that create foresight about the constantly changing shape of the risks it faces.”
Reports
Restraining Order
Aviation Child Safety Device Performance Standards Review
DeWeese, Rick; Moorcroft, David; Taylor, Amanda. U.S. Federal Aviation Administration (FAA) Civil Aerospace Medical Institute. DOT/FAA/AM-11/3. February 2011. 18 pp. Tables, figures, references.
Development of U.S. standards for child restraint systems (CRSs) appropriate for transport aircraft seats has been an awkward process. CRSs based on Federal Motor Vehicle Safety Standard (FMVSS)-213, originally the only means for approval, exhibited “poor performance” in aircraft seats, the report says. The motor vehicle standard was later supplemented by SAE International Aerospace Standard (AS) 5276/1, Performance Standard for Child Restraint Systems in Transport Category Airplanes, and by FAA Technical Standard Order (TSO)-C100b.
Later, aircraft passenger seats evolved in ways not envisioned by FMVSS-213 and TSO-C100b.
“The test requirements call for a combination of worst-case belt anchor location, belt tension and seat cushion properties/dimensions that were typical at the time the specifications were written,” the report says. “These parameters no longer appear to be representative of the majority of transport airplane seats. As such, difficulty complying with the standards based on these test parameters may be inadvertently hindering the availability of aviation-specific CRSs.”
Newer aircraft passenger seats meet the more stringent requirements of TSO-C127a, which specifies “16 g [16 times the standard acceleration of gravity]” structural integrity. “With the increased use of TSO-C127a seats, this combination of requirements may not be representative of the majority of current aircraft seats; thus, difficulties in developing aviation child safety devices (ACSDs) that meet these very conservative specifications may be inadvertently hindering the availability of such devices,” the report says. Faced with outmoded requirements, potential suppliers have requested revisions to the standard, and no proposed ACSD has been granted approval under the existing TSO (see “Collective Wisdom”).
In addition, U.S. Federal Aviation Regulations (FARs) Parts 91, 121, 125 and 135 have been revised, in light of TSO-C100b, to allow the use in aircraft of ACSDs that do not have FMVSS-213 approval.
“The specifications in AS5276/1 and TSO-C100b were developed to complement those in FMVSS-213; however, removing the requirement for ACSD to meet FMVSS-213 may have removed some requirements that are useful in ensuring safety,” the report says.
“Revision of the regulatory requirements in order to accommodate these new devices … has inadvertently removed some applicable requirements that are not duplicated in the TSO. Such requirements include: design specifications for occupant support surfaces, belt/buckle strength and durability tests, and defined occupant restraint configuration, geometry and adjustment range. In addition, FMVSS-213 has been revised significantly since TSO-C100b was written, improving several aspects that could benefit existing aviation standards and provide a safety benefit for ACSDs. These include use of advanced test dummies, enhanced test dummy preparation and positioning procedures, improved head injury assessment, and better CRS installation procedures.”
The report concludes that analysis of the various standards, as well as the current seat types in U.S. transport airplanes, “suggests that revisions to both the aerospace standard and the TSO based on technological evolution, improvements to test equipment and test procedures that are more representative of the aircraft environment would advance the development of ACSDs while maintaining or improving child safety.”