“What do the customers — the people who are paying for aviation services — want?” said Robert Sumwalt, member, U.S. National Transportation Safety Board (NTSB), speaking at the 56th annual Corporate Aviation Safety Seminar (CASS) in San Diego in April. “Do they want substandard performance, just meeting regulations, cutting corners? Or do you think they want best practices, where you’re talking about and implementing quality?
“The next question is, what are they getting? By definition, if you do not have written standard operating procedures [SOPs] and if you don’t insist that people follow them, you are no higher than basic regulatory compliance. To be at best practices, an operator adopts and implements quality standards, procedures, equipment, and training above and beyond regulatory requirements.”
In his presentation, which included reviewing accidents of special interest to corporate operators since the 2010 CASS, Sumwalt emphasized the connection between best practices and strict adherence to SOPs, citing the example of calling out “full flaps” to conform to the flight operations manual, rather than “flaps full.”
He asked the audience to think about questions such as, how do you measure adherence to SOPs? Do you reward the right kinds of behavior?
Sumwalt said, “I’m going to talk about an accident that involved fatigue. It involved a lack of professionalism, a customized checklist, a runway excursion. And it also involved a lack of safety leadership.”
He was referring to the crash of a Hawker 800A at Owatonna, Minnesota, U.S., in July 2008 that killed all eight occupants (See “Too Late to Go”). The factors Sumwalt cited as relevant to the accident also were examined by several other speakers at the CASS, although not specifically in connection with the Hawker.
Fatigue risk management systems have been studied, proposed and implemented in recent years. Generally, they do not monitor the status of individuals as they report for work. Gordon Dupont, CEO, System Safety Services, described a different kind of fatigue risk management system — one that is said to determine “fitness to work” of frontline personnel immediately prior to beginning a shift.
The system, called the Fit for Work Indicator, is a safety tool developed in Australia, originally for mine workers. “It provides a noninvasive tool for a range of personal and other factors that might result in personal impairment and be a workplace risk,” Dupont said. “This system has contributed to a significant lowering of the incident and injury rate at many sites, such as lowering injury-related lost time by more than 80 percent on some sites, and has been anecdotally credited with encouraging improved attitudes to alcohol moderation, personal health and fitness for work.”
The Fit for Work Indicator measures psychomotor skills involving hand-eye coordination to identify evidence of impairment. “The system does not rely on a predetermined community or industry standard, but requires each person to establish their own profile after completing a number of tests,” Dupont said. “It does this by using a computerized terminal to measure a person’s reactions in a simple coordination test, maintaining a moving + sign in the middle of a circle for a specified time while the system analyzes their performance.”
The individual’s previously established mean score is called the personal assessment level (PAL). Each test, which takes less than a minute, provides a comparison with the PAL — it does not measure one employee against others. If the results fall below a threshold, the test generates an alert that tells the individual to report to a supervisor.
It is up to the supervisor and organization to determine why the alert was generated. Dupont recommended that “any person who receives an alert should be required to fill out a questionnaire that asks for possible reasons, such as physical injury, fatigue, stress or alcohol. This should then be used by the supervisor as a basis for discussing the event with the employee and determining possible causes of impairment.”
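The baseline logic Dupont describes — each worker measured only against his or her own history — can be sketched in a few lines. This is a hypothetical illustration: the function names, the way the baseline is built, and the alert threshold are invented for the example; the real system’s scoring and thresholds are proprietary.

```python
# Hypothetical sketch of the "personal assessment level" (PAL) comparison
# described for the Fit for Work Indicator. The 80 percent alert threshold
# is an assumption made for illustration only.

from statistics import mean

ALERT_FRACTION = 0.8  # assumed: alert if a test falls below 80% of the PAL

def personal_assessment_level(baseline_scores):
    """The individual's own mean score from the initial series of tests."""
    return mean(baseline_scores)

def check_fitness(todays_score, pal, alert_fraction=ALERT_FRACTION):
    """Compare today's test against the person's own PAL, not against
    other employees. Returns True if acceptable, False if an alert
    should send the individual to a supervisor."""
    return todays_score >= pal * alert_fraction

# Example: a worker whose baseline coordination scores averaged 100
pal = personal_assessment_level([95, 102, 98, 105, 100])
check_fitness(92, pal)  # within tolerance, no alert
check_fitness(70, pal)  # well below baseline, alert raised
```

The key design point from the presentation survives any choice of threshold: the comparison is intra-personal, so a naturally slow but unimpaired worker is never flagged against a population norm.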
Professionalism, like character, is hard to define because it is a complex mixture of qualities rather than a single one; but like character, most people recognize it when they encounter it. Roger Cox, senior air safety investigator for the NTSB, talked about “Professionalism in Aviation: Approaches to Ensuring Excellence in Pilot and Air Traffic Controller Performance.”
Cox drew on the comments expressed by 45 panelists at the 2010 NTSB Professionalism in Aviation Safety Forum (See “Out of Bounds”). Cox referred to professionalism as an “intangible, an internalization of values” beyond being competent or skilled.
“Our panelists told us that the U.S. system of candidates self-selecting and self-financing private flying lessons was not producing the best professional pilots,” Cox said. “There was a need for better screening and selection. The panel said that airlines facing a shrinking pilot pool are faced with a hard decision: Either ground flights for lack of enough professional pilots, or alter their selection system. For most operators, more investment in good recruitment screening and selection would be money well spent.”
Cox said that forum participants mentioned selection criteria including technical competence, leadership, operational awareness, teamwork, attitude and how candidates deal with stress. “Employers can use a variety of tools to select these qualities, including interviewing,” he said. “But some of our panelists told us that interviewing is a special skill that has to be trained for, and unfortunately, a lot of people who interview pilot candidates have never learned what to look for and how to find out what you’re really trying to find out.”
The forum panelists also pointed out that progress is being made to institutionalize professionalism among many operators. “Companies are using a variety of methods, including line checks scheduled at random; CRM [crew resource management] leadership classes; line operations safety audits; and an emphasis on clear communication and feedback,” Cox said.
“Captains need to understand policies and procedures, and companies need to invest time and effort to be clear about why policies and procedures exist. They call that ‘buy-in,’ and it’s essential, especially to get younger pilots to buy into the standards we have.”
David Bjellos, president, Daedalus Aviation Services, took up the issues around customized checklists.
“Corporate aviation remains an adolescent with regard to regulatory oversight of checklists,” he said. “No legal precedent has been set concerning use of a customized checklist. However, some recent accident reports have listed incorrect checklist usage as a contributing factor. The emphasis should be on both content and proper use.”
U.S. Federal Aviation Regulations (FARs) Part 91 allows an operator to use any checklist it believes is appropriate to its flight operation outside the Part 142 training environment, Bjellos said (See “Checklist Confusion”).
Referring to a letter he had received from the U.S. Federal Aviation Administration (FAA) in response to his query about the acceptability of customized checklists, a letter included in the seminar proceedings, he said, “These recommendations are useful but do not answer the fundamental question of ‘what is acceptable?’ FAA has no formal opinion as to which checklist they prefer — OEM [original equipment manufacturer] or customized, but has made it clear they have no objection to customized versions.”
The burden of getting a customized checklist approved for use at a Part 142 training center is causing many Part 91 operators to use the OEM checklist for training and a customized checklist in operations, Bjellos said. “Sending a single pilot to training (versus a two-person crew) requires a common ground — usually the OEM checklist. Here is the classic conflict: Operators elect not to use their own checklists when in training, yet use them in normal operations.”
He proposed that operators should try to convince OEMs to develop a “Standard Normal Operations” checklist, including any approved flow patterns, with an option to customize it for retrofitted equipment.
“It is leadership that brings an SMS [safety management system] to life,” said Daniel J. Grace, manager, flight operations safety and security, Cessna Aircraft Co. “It is easy to manage an established SMS, but frankly, not everyone has the desire to jump into the safety world and be accountable for the operation and the decisions of others. The person who has the passion for this work and is open to its challenges will be the best candidate. This individual must be able to create strong relationships that form a bond among team members, and must be someone who can motivate and energize a team while building rapport with others to move in the desired direction.”
In addition, Grace said, a safety leader must be comfortable dealing with the organization’s top management. “It takes confidence in the work and the ability to discuss and explain a program that may be foreign to some. When discussing the SMS with senior leaders, it is important to explain to them why this is a valuable tool in the organization. It is also important to listen carefully to what they say, because senior leaders may provide additional direction for the program. This leadership discussion gets them involved and encourages them to take ownership of the program.”
The Flight of the Black Swan
AeroSafety World spoke with John Gadzinski, president, Four Winds Consulting, following his presentation on “Runway Excursions and Mitigation Strategies.” A former U.S. Navy pilot and flight instructor, he later served as air safety chairman for the Southwest Airlines Pilots Association and then as director of safety for the Coalition of Airline Pilot Associations.
ASW: What is a “black swan” event, and what does it have to do with aviation safety?
JG: A “black swan” event is a highly random or unexpected event that has a great impact on the environment in which it takes place. One of the problems that we have with aviation safety is that the significant events that affect us happen rarely. When we’re trying to understand safety in terms of a bell curve [graph of a normal distribution], many of the most significant events occur toward the tail ends of that bell curve. They can’t be predicted.
A lot of the aviation accidents we see are, by definition, black swans.
ASW: Even if an event is highly unusual and unpredictable, does that mean it is unimaginable?
JG: Imagination is the key. Flight involves a very complex system with interactions on many levels — pilot technique, checklist design, air traffic controllers, weather, and much more. Sometimes we have unexpected interactions that we might not have envisioned before.
The most classic, and tragic, example was the Apollo 1 launch pad fire. [Astronaut] Frank Borman testified at a Senate hearing that the cause of the accident was a failure of imagination. It wasn’t that they weren’t looking for dangers, they just never conceived that those dangers could occur on an unfueled rocket strapped to the earth going zero miles an hour. Yet, in 20/20 hindsight, the conditions for that accident were plain to see: the design of the hatch, the fact that they had pressurized that vessel with pure oxygen, the flammability of the Velcro.
Given that situation, being able to have a door that you could open quickly from the inside was a mitigation for a black swan that could occur in that capsule. Although you can’t necessarily prevent them from happening, you can create conditions so those occurrences don’t have catastrophic consequences.
ASW: How do you ask a corporate CEO or chief financial officer to spend a lot of money to preclude a one-in-a-million chance of a disaster?
JG: It’s a hard sell. They tend to think only in the middle of the bell curve. That’s why it’s important to convey an understanding of the inevitability of uncertainty.
ASW: You can’t just think about the odds, you have to come to grips with the potential severity of a seemingly improbable event?
JG: Right. And I think that as the view of safety, human factors and safety analysis progresses, the day might come when using this awareness of the effect of the highly improbable will become more standard, helping to mitigate that risk. For instance, not having an effective runway safety area might in the future be considered a careless act.
ASW: Runway excursions and their mitigation was the main subject of your presentation. How do randomness and improbability tie in with landings?
JG: On an aircraft carrier, where I conducted landings and acted as a landing signal officer, there is — for obvious reasons — an acute awareness of the extreme risk involved in deviations from the approved landing criteria. With so little margin for error, the response is to leave very little to chance in carrier landings. Randomness and improbability are reduced about as far as is humanly possible.
In civilian aviation, practical considerations mean that there is far greater randomness in landing lengths, for instance. That can cause, at the tails of the bell curve, drastic variations in performance. And, on occasion, that may combine with conditions conducive to an overrun, such as a flooded runway with the potential for hydroplaning tires or, as in Little Rock [an MD-82 overrun in 1999 with 11 fatalities], an inadvertent lack of ground spoilers.
ASW: What to do?
JG: One mitigation that civil aviation authorities have allowed is that if you have a runway safety area that’s less than standard — say, instead of a 1,000-ft [305-m] safety area, you have a 300-ft [91-m] area — and then you have a road behind it or some obstacle that could severely damage the airplane, you can take the existing runway and decrease its usable length by something known as a declared distance. Maybe you tell the operator, instead of having 6,000 ft [1,829 m] to land on, I’m going to allow you 5,600 ft [1,707 m], and I’ll repaint the surface for the new landing area. It isn’t actually lengthening the runway, but it’s as if there’s more paved area to accommodate overruns.
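The arithmetic behind the declared-distance example is simple and worth making explicit. The sketch below uses only the figures Gadzinski gives; the function name and structure are invented for illustration, and real declared-distance determinations follow the civil aviation authority’s criteria, not a straight subtraction.

```python
# Illustrative arithmetic for the declared-distance mitigation described
# in the interview. All numbers come from the example given: a 6,000-ft
# runway with a 300-ft safety area instead of the 1,000-ft standard.

def declared_landing_distance(runway_length_ft, reduction_ft):
    """Shorten the declared landing distance; the pavement itself is
    unchanged, so the difference becomes extra overrun buffer."""
    return runway_length_ft - reduction_ft

runway_ft = 6000       # physical runway length in the example
actual_rsa_ft = 300    # substandard runway safety area (standard: 1,000 ft)

declared_ft = declared_landing_distance(runway_ft, 400)  # 5,600 ft
buffer_ft = runway_ft - declared_ft                      # 400 ft of pavement

# Effective overrun protection beyond the declared landing area:
protection_ft = actual_rsa_ft + buffer_ft  # 700 ft, versus 300 ft before
```

The point of the technique is visible in the last line: nothing is built or lengthened, yet the protection beyond the declared runway end more than doubles, because pavement the operator can no longer plan to use is still there to absorb an overrun.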
Reducing the usable runway length so there’s “extra” pavement won’t stop all overruns, of course. Beyond the runway, you need some type of arresting device. It can be as simple as a grass strip. But if there’s something especially dangerous beyond the runway end, maybe you ought to consider an EMAS [engineered materials arresting system].
ASW: What else can safety managers do to prepare for unforeseeable events?
JG: The biggest challenge today is to elicit good safety reporting from front-line employees. A lot of the safety reporting systems we have today are geared toward avoiding punishment for noncompliance.
But when I fly, because I’m a “safety guy,” I see things every day I could write a report on. Maybe it’s something so simple that everybody takes it for granted: You pull into the ramp area and you can’t see the painted ingestion zones for your engines. Or you can’t see the lead-in lines because the lighting is bad or the paint is worn. These are precursors for a ground mishap, but it’s what most pilots think of as just a cost of doing business. You have to get these pilots to understand that if there’s something that makes their life a little more difficult, like a procedure that doesn’t harmonize with their operational needs, it has to be reported. And the person who reports it should be rewarded with positive feedback.