"What do the customers, the people who are paying for aviation services, want?" said Robert Sumwalt, member, U.S. National Transportation Safety Board (NTSB), speaking at the 56th annual Corporate Aviation Safety Seminar (CASS) in San Diego in April. "Do they want substandard performance, just meeting regulations, cutting corners? Or do you think they want best practices, where you're talking about and implementing quality?
"The next question is, what are they getting? By definition, if you do not have written standard operating procedures [SOPs] and if you don't insist that people follow them, you are no higher than basic regulatory compliance. To be at best practices, an operator adopts and implements quality, standards, procedures, equipment, and training above and beyond regulatory requirements."
In his presentation, which included reviewing accidents of special interest to corporate operators since the 2010 CASS, Sumwalt emphasized the connection between best practices and strict adherence to SOPs, citing the example of calling out "full flaps" to conform to the flight operations manual, rather than "flaps full."
He asked the audience to think about questions such as, how do you measure adherence to SOPs? Do you reward the right kinds of behavior?
Sumwalt said, "I'm going to talk about an accident that involved fatigue. It involved a lack of professionalism, a customized checklist, a runway excursion. And it also involved a lack of safety leadership."
He was referring to the crash of a Hawker 800A at Owatonna, Minnesota, U.S., in July 2008 that killed all eight occupants (See "Too Late to Go"). The factors Sumwalt cited as relevant to the accident also were examined by several other speakers at the CASS, although not specifically in connection with the Hawker.
Fatigue risk management systems have been studied, proposed and implemented in recent years. Generally, they do not monitor the status of individuals as they report for work. Gordon Dupont, CEO, System Safety Services, described a different kind of fatigue risk management system, one that is said to determine "fitness to work" of frontline personnel immediately prior to beginning a shift.
The system, called the Fit for Work Indicator, is a safety tool developed in Australia, originally for mine workers. "It provides a noninvasive tool for [assessing] a range of personal and other factors that might result in personal impairment and be a workplace risk," Dupont said. "This system has contributed to a significant lowering of the incident and injury rate at many sites, such as lowering injury-related lost time by more than 80 percent on some sites, and has been anecdotally credited with encouraging improved attitudes to alcohol moderation, personal health and fitness for work."
The Fit for Work Indicator measures psychomotor skills involving hand-eye coordination to identify evidence of impairment. "The system does not rely on a predetermined community or industry standard, but requires each person to establish their own profile after completing a number of tests," Dupont said. "It does this by using a computerized terminal to measure a person's reactions in a simple coordination test, maintaining a moving + sign in the middle of a circle for a specified time while the system analyzes their performance."
The individual's previously established mean score is called the personal assessment level (PAL). Each test, which takes less than a minute, provides a comparison with the PAL; it does not measure one employee against others. If the results fall below a threshold, the test generates an alert that tells the individual to report to a supervisor.
It is up to the supervisor and the organization to determine why the alert was generated. Dupont recommended that "any person who receives an alert should be required to fill out a questionnaire that asks for possible reasons, such as physical injury, fatigue, stress or alcohol. This should then be used by the supervisor as a basis for discussing the event with the employee and determining possible causes of impairment."
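The comparison Dupont described amounts to checking a pre-shift score against an individual's own baseline. The short Python sketch below illustrates that idea only; the scoring scale, the history length and the two-standard-deviation alert threshold are assumptions for illustration, not details of the actual Fit for Work Indicator.

```python
from statistics import mean, stdev

# Illustrative sketch of a PAL-style fitness check. The score scale, the
# tolerance and the sample history are assumptions, not the vendor's algorithm.

def personal_assessment_level(history):
    """The individual's own baseline: the mean of their past test scores."""
    return mean(history)

def fit_for_work(history, todays_score, tolerance_sd=2.0):
    """True if today's score stays within tolerance of the personal baseline;
    False if it falls far enough below to warrant a supervisor alert."""
    pal = personal_assessment_level(history)
    spread = stdev(history)
    return todays_score >= pal - tolerance_sd * spread

# Example: a stable history, then an unusually poor pre-shift test.
history = [82, 85, 79, 84, 81, 83, 80]
if not fit_for_work(history, todays_score=68):
    print("Alert: report to a supervisor to discuss possible impairment.")
```

Because each person is measured only against their own history, a naturally slow but consistent performer is not penalized, which matches Dupont's point that the system does not compare one employee with another.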
Professionalism, like character, is hard to define because it is a complex mixture of qualities rather than a single one; but like character, most people recognize it when they encounter it. Roger Cox, senior air safety investigator for the NTSB, talked about "Professionalism in Aviation: Approaches to Ensuring Excellence in Pilot and Air Traffic Controller Performance."
Cox drew on the comments expressed by 45 panelists at the 2010 NTSB Professionalism in Aviation Safety Forum (See "Out of Bounds"). Cox referred to professionalism as an "intangible, an internalization of values" beyond being competent or skilled.
"Our panelists told us that the U.S. system of candidates self-selecting and self-financing private flying lessons was not producing the best professional pilots," Cox said. "There was a need for better screening and selection. The panel said that airlines faced with a shrinking pilot pool are faced with a hard decision: Either ground flights for lack of enough professional pilots, or alter their selection system. For most operators, more investment in good recruitment screening and selection would be money well spent."
Cox said that forum participants mentioned selection criteria including technical competence, leadership, operational awareness, teamwork, attitude and how candidates deal with stress. "Employers can use a variety of tools to select these qualities, including interviewing," he said. "But some of our panelists told us that interviewing is a special skill that has to be trained for, and unfortunately, a lot of people who interview pilot candidates have never learned what to look for and how to find out what you're really trying to find out."
The forum panelists also pointed out that progress is being made to institutionalize professionalism among many operators. "Companies are using a variety of methods, including line checks scheduled at random; CRM [crew resource management] leadership classes; line operations safety audits; and an emphasis on clear communication and feedback," Cox said.
"Captains need to understand policies and procedures, and companies need to invest time and effort to be clear about why policies and procedures exist. They call that 'buy-in,' and it's essential, especially to get younger pilots to buy into the standards we have."
David Bjellos, president, Daedalus Aviation Services, took up the issues around customized checklists.
"Corporate aviation remains an adolescent with regard to regulatory oversight of checklists," he said. "No legal precedent has been set concerning use of a customized checklist. However, some recent accident reports have listed incorrect checklist usage as a contributing factor. The emphasis should be on both content and proper use."
U.S. Federal Aviation Regulations (FARs) Part 91 allows an operator to use any checklist it believes is appropriate to its flight operation outside the Part 142 training environment, Bjellos said (See "Checklist Confusion").
Referring to a letter he had received from the U.S. Federal Aviation Administration (FAA) in response to his query about the acceptability of customized checklists, a letter included in the seminar proceedings, he said, "These recommendations are useful but do not answer the fundamental question of 'what is acceptable?' FAA has no formal opinion as to which checklist they prefer, OEM [original equipment manufacturer] or customized, but has made it clear they have no objection to customized versions."
The burden of getting a customized checklist approved for use at a Part 142 training center is causing many Part 91 operators to use the OEM checklist for training and a customized checklist in operations, Bjellos said. "Sending a single pilot to training (versus a two-person crew) requires a common ground, usually the OEM checklist. Here is the classic conflict: Operators elect not to use their own checklists when in training, yet use them in normal operations."
He proposed that operators should try to convince OEMs to develop a "Standard Normal Operations" checklist, including any approved flow patterns, with an option to customize it for retrofitted equipment.
"It is leadership that brings an SMS [safety management system] to life," said Daniel J. Grace, manager, flight operations safety and security, Cessna Aircraft Co. "It is easy to manage an established SMS, but frankly, not everyone has the desire to jump into the safety world and be accountable for the operation and the decisions of others. The person who has the passion for this work and is open to its challenges will be the best candidate. This individual must be able to create strong relationships that form a bond among team members: someone who can motivate and energize a team while building rapport with others to move in the desired direction."
In addition, Grace said, a safety leader must be comfortable dealing with the organization's top management. "It takes confidence in the work and the ability to discuss and explain a program that may be foreign to some. When discussing the SMS with senior leaders, it is important to explain to them why this is a valuable tool in the organization. It is also important to listen carefully to what they say, because senior leaders may provide additional direction for the program. This leadership discussion gets them involved and encourages them to take ownership of the program."
The Flight of the Black Swan
AeroSafety World spoke with John Gadzinski, president, Four Winds Consulting, following his presentation on "Runway Excursions and Mitigation Strategies." A former U.S. Navy pilot and flight instructor, he later served as air safety chairman for the Southwest Airlines Pilots Association and then as director of safety for the Coalition of Airline Pilot Associations.
ASW: What is a "black swan" event, and what does it have to do with aviation safety?
JG: A "black swan" event is a highly random or unexpected event that has a great impact on the environment in which it takes place. One of the problems that we have with aviation safety is that the significant events that affect us happen rarely. When we're trying to understand safety in terms of a bell curve [graph of a normal distribution], many of the most significant events occur toward the tail ends of that bell curve. They can't be predicted.
A lot of the aviation accidents we see are, by definition, black swans.
ASW: Even if an event is highly unusual and unpredictable, does that mean it is unimaginable?
JG: Imagination is the key. Flight involves a very complex system with interactions on many levels: pilot technique, checklist design, air traffic controllers, weather, and much more. Sometimes we have unexpected interactions that we might not have envisioned before.
The most classic, and tragic, example was the Apollo 1 launch pad fire. [Astronaut] Frank Borman testified at a Senate hearing that the cause of the accident was a failure of imagination. It wasn't that they weren't looking for dangers; they just never conceived that those dangers could occur on an unfueled rocket strapped to the earth going zero miles an hour. Yet, in 20/20 hindsight, the conditions for that accident were plain to see: the design of the hatch, the fact that they had pressurized that vessel with pure oxygen, the flammability of the Velcro.
Given that situation, being able to have a door that you could open quickly from the inside was a mitigation for a black swan that could occur in that capsule. Although you can't necessarily prevent them from happening, you can create conditions so those occurrences don't have catastrophic consequences.
ASW: How do you ask a corporate CEO or chief financial officer to spend a lot of money to preclude a one-in-a-million chance of a disaster?
JG: It's a hard sell. They tend to think only in the middle of the bell curve. That's why it's important to convey an understanding of the inevitability of uncertainty.
ASW: You can't just think about the odds; you have to come to grips with the potential severity of a seemingly improbable event?
JG: Right. And I think that as the view of safety, human factors and safety analysis progresses, the day might come when using this awareness of the effect of the highly improbable will become more standard, helping to mitigate that risk. For instance, not having an effective runway safety area might in the future be considered a careless act.
ASW: Runway excursions and their mitigation was the main subject of your presentation. How do randomness and improbability tie in with landings?
JG: On an aircraft carrier, where I conducted landings and acted as a landing signals officer, there is, for obvious reasons, an acute awareness of the extreme risk involved in deviations from the approved landing criteria. With so little margin for error, the response is to leave very little to chance in carrier landings. Randomness and improbability are reduced about as far as is humanly possible.
In civilian aviation, practical considerations mean that there is far greater randomness in landing lengths, for instance. That can cause, at the tails of the bell curve, drastic variations in performance. And, on occasion, that may combine with conditions conducive to an overrun, such as a flooded runway with the potential for hydroplaning tires or, like in Little Rock [an MD-82 overrun in 1999 with 11 fatalities], an inadvertent lack of ground spoilers.
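To put the tail-of-the-bell-curve point in concrete terms, the short Python sketch below estimates how often a normally distributed landing distance would exceed the runway available. The mean, spread and runway length are invented numbers for illustration only, not figures from the presentation.

```python
import random

# Illustration of tail risk in landing performance. The mean landing distance,
# its spread and the runway length are made-up numbers; the point is that rare
# tail events still accumulate over many landings.

def estimated_overrun_rate(n_landings=1_000_000,
                           mean_landing_ft=4200.0,
                           sd_landing_ft=450.0,
                           runway_available_ft=6000.0):
    """Fraction of simulated landings whose distance exceeds the runway."""
    overruns = sum(
        1 for _ in range(n_landings)
        if random.gauss(mean_landing_ft, sd_landing_ft) > runway_available_ft
    )
    return overruns / n_landings

print(f"Estimated overrun rate per landing: {estimated_overrun_rate():.6f}")
```

Even a rate of a few overruns per 100,000 landings adds up across a fleet's operations, which is why the severity of what lies beyond the runway end matters as much as the odds of getting there.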
ASW: What to do?
JG: One mitigation that civil aviation authorities have allowed is that if you have a runway safety area that's less than standard (say, instead of a 1,000-ft [305-m] safety area, you have a 300-ft [91-m] area) and then you have a road behind it or some obstacle that could severely damage the airplane, you can take the existing runway and decrease its usable length by something known as a declared distance. Maybe you tell the operator, instead of having 6,000 ft [1,829 m] to land on, I'm going to allow you 5,600 ft [1,707 m], and I'll repaint the surface for the new landing area. It isn't actually lengthening the runway, but it's as if there's more paved area to accommodate overruns.
Reducing the usable runway length so there's "extra" pavement won't stop all overruns, of course. Beyond the runway, you need some type of arresting device. It can be as simple as a grass strip. But if there's something especially dangerous beyond the runway end, maybe you ought to consider an EMAS [engineered materials arresting system].
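The declared-distance arithmetic in Gadzinski's example is simple enough to write down. The sketch below uses the figures from his answer (a 6,000-ft runway trimmed to 5,600 ft, with a 300-ft existing safety area); it is a simplified illustration of the idea, not the FAA's actual declared-distance methodology.

```python
# Simplified declared-distance arithmetic using the figures from the example
# above. Real declared-distance determinations involve more factors; this only
# shows how trimming usable length converts pavement into overrun margin.

def declared_landing_distance(runway_ft: float, reduction_ft: float) -> float:
    """Usable landing distance after the authority trims the runway length."""
    return runway_ft - reduction_ft

def overrun_margin(reduction_ft: float, safety_area_ft: float) -> float:
    """Pavement beyond the declared landing distance plus the existing safety area."""
    return reduction_ft + safety_area_ft

runway_ft = 6000.0        # physical runway length in the example
reduction_ft = 400.0      # 6,000 ft declared down to 5,600 ft
safety_area_ft = 300.0    # substandard safety area in the example

print(declared_landing_distance(runway_ft, reduction_ft))  # 5600.0 ft to land on
print(overrun_margin(reduction_ft, safety_area_ft))        # 700.0 ft of overrun margin
```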
ASW: What else can safety managers do to prepare for unforeseeable events?
JG: The biggest challenge today is to elicit good safety reporting from front-line employees. A lot of the safety reporting systems we have today are geared toward not being punished for noncompliance.
But when I fly, because I'm a "safety guy," I see things every day I could write a report on. Maybe it's something so simple that everybody takes it for granted: You pull into the ramp area and you can't see the painted ingestion zones for your engines. Or you can't see the lead-in lines because the lighting is bad or the paint is worn. These are precursors for a ground mishap, but it's what most pilots think of as just a cost of doing business. You have to get these pilots to understand that if there's something that makes their life a little more difficult, like a procedure that doesn't harmonize with their operational needs, it has to be reported. And the person who reports it should be rewarded with positive feedback.
– RD