The first two articles in this series (ASW, 2/13 and ASW, 4/13) described our attempt to begin to understand the psychology of why 97 percent of the time, when flying an unstable approach (UA), pilots do not call a go-around (GA) as a preventative mitigation against approach and landing accidents.1
Part of Flight Safety Foundation’s 2011 “Go-Around Decision Making and Execution Project” effort, the first article presented a new description of the various facets of this decision making psychology in terms of what we have called the Presage “Dynamic Situational Awareness Model (DSAM).” The second article described a “situated recall” survey experiment in which we asked more than 2,300 pilots worldwide to describe in detail their experiences (thoughts, feelings, actions) in the moments leading up to their last decision between continuing to fly a UA or calling a GA maneuver. A comparison of the psychology preceding decision making in these two scenarios provided a look at what factors may be implicated in causing a near-complete avoidance in the industry to call a GA.
The results showed that the differences were stark: Pilots continuing to fly UAs perceived far less risk than pilots deciding to go around, and this likely is the immediate “cause” that explains why they failed to go around. But backing up from these risk assessments to psychological and psychosocial factors, we saw that pilots failing to fly GAs reported that they had significantly degraded representations and awareness of the situation in terms of all nine of the DSAM dimensions. So, above and beyond the objective aircraft and environmental factors, pilots’ accounts of their experiences revealed that many surprising aspects of the situation were the main drivers of their judgments about the manageability of the risk, including social and organizational norms and expectations.
In this article, we will continue to report the results of this study, describing judgments that the pilots provided us in hindsight to explain what they thought the main drivers of their decisions had been. We will examine whether pilots were successful in landing safely (as they view the events in hindsight), whether there had been any company response — approval or repercussion — and whether they experienced any regret after making the choices they did. We will also report the results of a second experiment we conducted within the survey, in which we asked pilots to describe their personal thresholds for UA risk, to compare how closely pilots’ personal thresholds align with industry standards. Finally, we will summarize all of the findings in a preliminary list of recommendations about how to combat the psychology of non-compliance with GA policies and procedures, given all that we have discovered in our research to date.
Scenario Analysis Versus Hindsight Judgments
We know from our event recall results that “dimmed” situational awareness exists with those pilots choosing to continue to fly a UA. But if asked, in hindsight, about the reasons for their decision, what would these pilots say?
We posed a list of 16 possible influences to pilots in both scenario groups and asked them to report the degree of influence each had on their decision. The results (Table 1) reveal that pilots in both groups named nine causes in common in each of their average “top 10 influencers.” The rank order of factors between the groups is revealing. The top four reasons why UA pilots stated they continued the approach were all associated with their ability as a crew to compensate. These included their experience and the presence of a high-functioning crew. They also admitted to the moderate influence of peer pressure to land and a personal resistance to managing the demands of a GA.
GA pilots, on the other hand, said the greatest influences on their decision were the aircraft instabilities they were dealing with per se, experience (judged a strong but lesser factor by UA pilots), and the weather and aircraft configuration (both seen as stronger influences on their decision than by UA pilots).
While informative, these patterns of reported influences may indicate a post hoc rationalization of pilots’ respective decisions, and escaping this problem is one of the reasons why the guided event recall procedure was developed: to try to place pilots back into the situations they had experienced and have them re-live their thoughts, feelings and perceptions without the filter of conscious interpretation or justification. However, when we examined those recall data and compared them with pilots’ after-the-fact stated reasons accounting for their decisions, we saw little evidence of rationalization. Instead, we see a very close alignment between how pilots lived their respective approaches and decisions, either initiating a GA in “bright” situational awareness or continuing a UA in a “dimmed” DSAM condition, and how, in hindsight, they reported having made those decisions.
In accordance with our DSAM model, with relatively high situational awareness comes the following: a pilot’s expert ability to see the threat (anticipatory awareness) such as aircraft instability, configuration and weather well in advance of intercepting it; naturally defaulting to his or her experience (critical awareness) as a means to validate the perceived threat(s); accurately assessing the crew dynamics (relational awareness) for confidence and support; and finally, expertly adjusting (compensatory awareness) for external threats, such as fatigue and pressures to land.
GA pilots had higher awareness of the situation, and, when their attention was drawn to aircraft and instability factors, those factors became salient in attention and memory as the likely top influences on their decision. Meanwhile, because their situational awareness competencies were dimmed, UA pilots naturally experienced the following: not seeing certain threats (anticipatory awareness) such as aircraft instabilities, weather and aircraft configuration; selectively leveraging their “stick and rudder” experiences (critical awareness) as permission to continue; and finally, perceiving or assuming crew dynamics (relational awareness) to support non-compliant behavior. In the end, these respective profiles of what pilots reported had shaped their decision making add further explanatory power to the negative effects of dimmed situational awareness.
Perceptions of Flight Outcomes
We also asked pilots to report on the success and other outcomes of their UA or GA episodes. GA pilots reported that their maneuvers were well coordinated and executed. UA pilots agreed, with the exception that they reported long landings. However, when asked whether they had made the right decision under the circumstances, UA pilots doubted themselves in retrospect: They were less likely to say that they had done the right thing, and more likely to say they had engaged in needless risk and should have called a GA. More than twice as many UA pilots as GA pilots also reported having changed their views of flying UAs and GAs as a result of experiencing the event they described.
There also is evidence here for the “normalization of deviance” to help explain pilot noncompliance with GA policies. Pilots flying UAs reported that their companies had responded with neither clear, consistent criticism nor support for their decisions. When companies fail to manage noncompliant behavior in this area, this lack of feedback implicitly allows such behaviors to flourish, as it sends a signal that such risk taking is “the new normal.” In our last article, we saw the manifestations of this implicit approval: In the moments leading up to their decisions, pilots reporting a UA experience said they were in less agreement with their companies’ GA policies and procedures, and more tolerant of deviations from them.
Pilots’ Personal Thresholds
To more fully understand why pilots do not call GAs, we conducted a small experiment within the survey in which 1,754 (79 percent) of our pilots took part. This experiment was designed to uncover the environmental and physical instability parameters that have the most influence on pilots’ perceptions of the risks inherent in flying UAs, and to examine when their attention to these parameters affects their judgments about calling GAs.
Pilots were presented with a hypothetical flight scenario in which they were randomly assigned to different experimental conditions in which they received variations in the severity of the risk associated with wind conditions, runway conditions/braking action and runway length, on a visual meteorological conditions approach. They were then asked at what degree of aircraft instability they would call a GA. Pilots were instructed to report on the instability thresholds for calling a GA based on their own personal risk criteria. This allowed us to infer where on the flight path different risk factors become personally salient and important as drivers of pilots’ judgments. The overall objective was to determine whether there was basic alignment between pilots’ perceptions about when there is a need to call a GA and general industry policies about when these instabilities necessitate such a decision. Our goal was to then use these data to guide realistic recommendations about changes to policy that might bring them into alignment, without compromising safety. To the extent that pilots do not see current policies as constituting a set of legitimately unsafe conditions, they are likely to ignore such standard operating procedures (SOPs) and engage in potentially riskier, noncompliant behaviors. While the experiment’s design and many findings are complex, the overall results are fairly clear and are depicted simply in Figure 1.
Across the entire experiment, a large percentage of pilots had personal judgments about the threshold at which they would call a GA that were less conservative than industry standards. In other words, these pilots told us that they would not judge that a GA was warranted within the industry’s limits on these instability parameters. This was especially true at 1,000 ft for all of the five instability measures examined, but even at 500 ft for some of them. Among the five, pilots perceived the least necessity to call a GA when their airspeed exceeded VREF by 10 kt: At 1,000 ft above ground level (AGL), more than 80 percent of pilots did not yet see a need to call a GA, and even at 50 ft AGL, nearly one in five pilots said their personal thresholds for safety had not been breached. At 500 ft, personal exceedances beyond published limits were present for more than 50 percent of the pilots reporting.
The story was somewhat different for pilots’ thresholds for vertical and horizontal flight path deviation, however. At 1,000 ft, between 30 percent and 40 percent of pilots felt that deviations in excess of 1 dot were not yet beyond their personal thresholds for manageability. But between 1,000 ft and 500 ft, their thresholds came quickly into alignment with industry limits, for by 500 ft, only 10 percent of pilots said that their personal envelope was less conservative than the industry’s published standards.
To the extent that these personal thresholds for risk differ from published limits, we can expect pilots to psychologically downplay industry compliance standards and procedures surrounding an unstabilized approach. The challenge in amending industry policies to better honor pilots’ judgment and experience in managing such unstable aircraft states, and inspire better overall compliance with go-around policies, is to do so without inadvertently lowering overall safety.
Inadequate Situational Awareness
The results of the pilots’ reflections on their experience of flying a UA provide further empirical evidence to support the idea that pilots who continued a UA did not have adequate situational awareness to accurately assess the risk. Arguably, the most salient finding in support of this assertion is the UA pilots’ experience of post-decisional regret. Regret is every pilot’s moral compass, and in this case, it points in the direction of “dimmed situational awareness,” highlighting what was perhaps a willful and unacceptable decision. Moreover, UA pilots clearly used their unstable approach experience as a teachable moment, inasmuch as they reported that they had somewhat changed their views of both UA and GA policies.
The results of the experiment showed pilots are, in general, comfortable with lowering the GA thresholds under certain in-flight conditions. The question now becomes to what extent this comfort is driven by the normalization of deviance, and what role flight department management and regulators play in it.
As the FSF Go-Around Decision Making and Execution Project is ongoing, the following recommendations are preliminary and based only upon the results of this portion of Phase 1 work. We offer the following recommendations with three essential strategies (S1 to S3) in mind.
S1 Enhance flight crews’ situational awareness (psychosocial awareness) through policy and procedural changes and improved communication, heightening that awareness throughout the approach — through the stabilized approach height and beyond — until landing.
S2 Optimize the stable approach definition and height to maximize its relevance to flight crews and its manageability by flight managers/supervisors.
S3 Minimize the subjectivity of UA versus GA decision making for the decision maker (e.g., pilot flying, captain as per company policy), mitigating the degraded components of situational awareness that directly compromise the pilot’s risk assessment and decision making ability, so that he or she will be able to more accurately assess operational risk and remain compliant.
It should be stressed that the above strategies cannot be addressed in isolation. Doing so could increase the relative risk level of an unstable approach. For example, lowering a stable approach decision altitude (S2) without increasing the flight crew’s situational awareness (S1), expertise and vigilance may actually increase risk. Moreover, particular attention will be paid to the types of communication recommended (passive, active, progressive, informative and instructive). Strategic placement of mandatory communication of the right type throughout the approach is important in achieving consistent and reliable compliance. Similarly, strategically placed SOPs designed to have crews discuss and identify instability factors prior to and during the approach will naturally enrich the flight crew’s relational, anticipatory and compensatory awareness.
Table 2 offers preliminary recommendations, links them to our strategic intents and lists the psychosocial DSAM constructs they explicitly address.
This research and analysis set out to help determine if there exists, from a psychological point of view, an answer to the question “Why are GA decisions that policy states should be made, actually not being made during so many unstable approaches?” and to then make preliminary recommendations based on the findings.
The results to date demonstrate there are clear differences in situational awareness, crew interaction, risk assessment and decision making between flight crews who elect to continue with a UA versus those who opt to go around. These psychological differences in the moments leading up to the point where a GA decision might be made are robust and variegated, and imply a series of targeted mitigations that can be instituted to better ensure GA decision making compliance.
The Presage Group specializes in real-time predictive analytics with corrective actions to eliminate the behavioral threats of employees in aviation and other industries. Readers interested in greater detail concerning the experimental and survey methodologies and analyses used in this study are referred to a more comprehensive report — “Why are go-around policies ineffective? The psychology of decision making during unstable approach” — available on our website, www.presagegroup.com.
- Burin, James M. “Year in Review.” In Proceedings of the Flight Safety Foundation International Air Safety Seminar. November 2011.