A pilot from the pioneering days of aviation, who flew with little more than a compass for flight information, would no doubt be delighted — and overwhelmed — by the array of instruments on today’s flight decks. More displays mean that pilots have more information — and that leads to improved decision making and enhanced flight safety.
However, cognitive scientists warn that providing more and more information has its limitations: it increases workload and can actually reduce the amount of information pilots can absorb and act upon. This concern may be most important in emergency situations, when multiple, simultaneous warning displays activate and overwhelm pilots with information.
This warning has not gone unheeded, as many cockpits have a declutter mode, allowing pilots to greatly reduce the number of instrument displays vying for their attention. When a declutter mode is not available, pilots often simply turn off the instruments they consider unnecessary or distracting.
The first step in avoiding the potential problem of information overload is understanding the balance between information requirements (how much information is needed) and availability (how much information is being presented or is quickly accessible).1 How much information is required is ever-changing and depends on flight task, aircraft type and phase of flight. Information availability also depends on the cockpit instrument panel design — that is, the number and location of instruments, the types of displays and the modes of information presentation.
Equally important is the human at the controls. Pilots use multiple senses — especially sight and hearing — to gather information about their aircraft and its relationship to the outside world (i.e., situational awareness).
Although the first aviation displays were entirely visual in nature, many modern displays have both visual and auditory modalities; this is especially true of displays presenting caution and warning information.
Visual displays primarily present information using intensity (brightness), size and color characteristics. Auditory displays use intensity (loudness) and frequency (tone). Both visual and auditory displays often incorporate a pulsating characteristic, such as a flashing light or a beeping tone.
Displays can be considered as having two functional modes. The first, and most obvious, is to present current status information for various aircraft performance parameters such as airspeed and angle of climb, and to have this information always available. This is especially true for visual displays. An example is an altitude indicator. There is no need for pilots to continuously monitor this parameter, but the information is always there, for example, for timely awareness of deviation from clearances.
In a second functional mode, a display may serve as a caution or warning indicator. In this mode, the display moves from a passive to an active function. Based on certain predetermined criteria, the display alerts pilots that the current status of some flight parameter requires monitoring or immediate action. Communications systems, which are a type of auditory display, fall within this mode, as air traffic control communications regarding altitude changes or the presence of nearby aircraft require acknowledgement and possible action. Stall warning indicators are another example.
While all flight displays should be monitored at appropriate intervals, pilots generally interface most with the displays used during takeoff and landing and during emergency situations. During the en route phase of flight, the use of autopilot is customary, with the interface adapted for monitoring.
During takeoffs and landings, pilots use scanning techniques to systematically and purposefully direct their attention to the important and relevant task-defined displays. During these flight phases, pilots must monitor hundreds of sources of information within the cockpit, as well as attend to additional inputs from outside the aircraft. In general, pilots can select when and where to direct attention during most, if not all, phases of flight.
In an emergency scenario, multiple caution and warning displays generate new visual or auditory stimuli, such as red flashing lights or sirens, which are intended to capture pilots’ attention. In such situations, there is a sudden shift in how pilots interact with the displays, which compete to capture pilots’ urgent and full attention.
In today’s fast-paced world, multi-tasking is considered the norm, with motor, visual and auditory tasks apparently attended to simultaneously. However, studies consistently show that overall performance suffers when attention is divided among multiple tasks.2 The ability to absorb visual and auditory information from multiple sources is often thought of as second nature, but the concept of attention is actually complex.
Attention is formally defined as the mechanism by which the brain ensures that a preferred sensory input receives immediate cognitive processing over all other inputs.3 This definition presumes some preliminary cognitive processing, with or without attention. Attention may be better understood if it is thought of as a process that ensures continued cognitive processing of a chosen sensory input. Attention must be actively and continuously maintained in order to sustain a high level of cognitive processing of a desired input. An obvious implication is that if attention is shifted from one input to another, intentionally or inadvertently, cognitive processing of the first input is greatly reduced, if not terminated altogether.
While a complete understanding of how attention works still eludes cognitive scientists, two basic tenets have been identified: Attention is limited, and attention is selective. However, details beyond these general statements remain in contention.
The principle that attention is limited leads to an appealing, but not fully accepted, idea that humans have a finite pool of attention resources that can be distributed across one or more sensory inputs (divided attention). This portrayal is useful in a first attempt to understand attention, but it fails to capture attention’s complicated nature.
The concept of a finite pool of attention resources implies that tasks may be performed in parallel by dividing these resources. However, there is not total agreement as to whether truly simultaneous parallel attention is occurring when we divide attention between two or more tasks at the same time, or if attention is just rapidly being switched between individual sensory inputs.
Some psychologists believe that there is not one pool of attention resources but several. Even if multiple resource pools exist, some studies have suggested that resources used for visual and auditory stimuli may not be completely separate; this is more apparent when the stimuli are in different locations.4 In the cockpit, this may translate into pilots not being able to effectively attend to both visual and auditory warnings from opposite sides of the cockpit.
Aviation psychologist Chris Wickens5,6 has suggested a multiple resource theory for attention that says each task has three dimensions that determine how attention is allocated. These dimensions are:
- Which cognitive processing stage does the task involve — for example, perceiving a light or selecting a switch to turn on?
- Does the task involve the verbal or spatial mode of processing — listening to a communication or searching for a specific instrument readout?
- What are the types of input and output involved — auditory or visual inputs; verbal or motor outputs?
Wickens’ theory argues that there may be a separate pool of attention resources for each combination of the three task dimensions and that performance deteriorates when there is a shortage of these different resources.
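The theory’s core prediction can be illustrated with a minimal sketch. The `Task` class and `shared_dimensions` function below are hypothetical names chosen for illustration, and the three dimension labels are a simplified rendering of the task dimensions described above; the idea is simply that the more dimensions two concurrent tasks share, the more they compete for the same resource pool.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    """A task described by three Wickens-style dimensions (toy labels)."""
    stage: str     # processing stage: "perception" or "response"
    code: str      # mode of processing: "verbal" or "spatial"
    modality: str  # input/output channel: "auditory" or "visual"

def shared_dimensions(a: Task, b: Task) -> int:
    """Count dimensions on which two concurrent tasks draw the same resource.

    Under the multiple resource view, a higher count predicts greater
    interference when the tasks are performed at the same time.
    """
    return sum([a.stage == b.stage,
                a.code == b.code,
                a.modality == b.modality])

# Monitoring ATC communications (perceiving verbal, auditory input) versus
# scanning the instrument panel (perceiving spatial, visual input): only the
# processing stage overlaps, so relatively little interference is predicted.
atc = Task("perception", "verbal", "auditory")
scan = Task("perception", "spatial", "visual")
print(shared_dimensions(atc, scan))  # 1
```

Two identical tasks would share all three dimensions and, on this account, compete maximally for the same resources, which is consistent with the performance deterioration the theory describes.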
Regardless of whether multi-tasking occurs simultaneously or with rapid switching, the ability to divide attention seems to depend on a number of factors.7 A primary factor is the difficulty of the attention task. It is obvious that the more difficult a task, the greater concentration (attention) is required. It is possible for a task to be so difficult that divided attention is impossible. Not as obvious is that the attention resources required for a specific task do not remain constant. Training and experience can reduce the requirements.
When a task is first learned, the attention resources required can be so great that the task may be all-encompassing. With practice, the attention demand decreases dramatically, and the task becomes automatic. This reduction in attention demand is called automaticity and is the major goal of training. The mechanisms that allow for automaticity are not clearly understood. One theory suggests that with training, some of the processes involved in an attention task eventually are eliminated.8 Another proposed explanation is that with well-practiced tasks, the increased role of memory reduces the attention demand.9
As might be expected, if a specific task is not performed routinely, or if a long delay occurs in the practice or performance of a task, the task’s attention demand will return to its previous higher level. This inevitable regression underlies requirements for maintaining flight proficiency.
Task type is another factor affecting divided attention when performing multiple tasks. For tasks largely related to attending to cockpit instruments, task type can be defined by the information input method and can be categorized as either visual, such as searching for a specific switch or reading a display value, or auditory, such as monitoring communications for your call sign or attending to an auditory display warning. Like attention in general, theories of how two or more different types of tasks compete for attention are complex and not fully understood.
An example of this complexity is in the size of the environment from which competition for attention can arise. Because humans have two ears on opposite sides of the head, we are sensitive to sounds generated anywhere around us. In contrast, the human instantaneous visual field — what can be seen when the head is in a fixed position — is limited to mostly the frontal hemisphere. Foveal vision, which provides fine detail, is further restricted to only about two degrees of the visual field.
In studies that have looked at the problem of simultaneous visual and auditory inputs, such as an indicator light flashing at the same time a tone is emitted, it was found that the tone frequently is not detected. This suggests that attention tends to favor visual input when both the visual input and the source of the auditory input are located within the visual field. However, in natural environments, sounds not collocated with a visual input can be used to draw attention to a visual target or event not in the visual field.
The first generally accepted tenet of attention — that it is limited — very likely leads to the second tenet — that attention is selective. It seems reasonable that if an asset is limited, then the user of the asset would have the ability to determine where it should be used.
Selective attention is the process of choosing what to attend to. This may involve directing attention to a specific object or event, or in a general direction. Though the brain continues to receive information from the entire environment, most of this information is largely ignored. Selective attention enables a person to concentrate on the input of interest while disregarding other inputs from the environment such as engine noise, cabin conversation or changing display readouts.10
Selective attention is called “top-down” processing, which is goal-driven; the individual determines which stimulus receives the selective attention. This also is referred to as executive attention. This does not imply that all attention resources have been focused on the selected single object or the event of interest. The brain continues to use a “bottom-up” approach in which attention is stimulus-driven. This means that there are certain aspects — for example, color, brightness increase or loudness — of an input stimulus that can override a person’s selective (focused) attention.
Sometimes selective attention can go too far, resulting in attention tunneling. Colloquially called tunnel vision, this condition occurs when a pilot fixates on a specific input while becoming oblivious to all other incoming information. All attention resources become dedicated to a single input from one information source. This could be a specific location or specific readout on a display, or it could be some object outside the aircraft. In many cases, stress, workload and fatigue can increase the likelihood of tunnel vision.11
The classic experiment conducted by Fischer, Haines and Price12 revealed that pilots flying with a head-up display (HUD) were less likely to detect unexpected runway incursions than those flying with conventional head-down instruments, despite the fact that the HUD allowed direct runway viewing so that the incursion could be seen. The study concluded that pilots were focusing their full attention on the HUD symbology at the expense of all other visual information.
In today’s instrument-rich cockpits, pilots typically encounter more information (sensory inputs) than can be processed at any one time, especially during emergency situations when multiple warnings may be flashing and chiming.13 This can lead to a condition known as sensory overload.
Sensory overload places an over-demand on cognitive resources. The theory of multiple pools of attention resources explains sensory overload as a supply-and-demand problem that occurs when an individual must perform two or more tasks that require the same resource. Conversely, overload will not occur if the multiple tasks do not make demands on the same resources. Sensory overload has long been known to cause pilot error in simulator studies14 and is believed to be a contributing factor in a number of aviation accidents attributed to pilot error. Situations of sensory overload can cause disorientation, degrade decision-making ability, and delay or even prevent the correct response.15
This raises the question of whether one attention resource dominates the others. A number of studies have indicated that in most tasks, cognitive processing seems to favor visual inputs over auditory inputs.16
Solving the Problem
Reviews of the U.S. National Transportation Safety Board (NTSB) aviation accident database have concluded that nearly half of the reported accidents could be attributed to crew error involving lapses of attention.17,18 Human factors experts continue to study the attention issues involving pilots and instrument displays, and to develop better guidelines for information presentation. Better guidelines, coupled with the flexibility of glass cockpits that are no longer constrained in the type and location of the information they present, may help reduce the potentially disastrous consequences of sensory overload and attention tunneling.
Clarence E. Rash is a research physicist with 35 years of experience in military aviation research and development and the author of more than 200 papers on aviation display, human factors and protection topics. He also teaches a course in sensory, perceptive and cognitive human factors engineering principles through the University of Tennessee Space Institute in Tullahoma, Tennessee, U.S.
- Deveans, T.; Kewley, R. Overcoming Information Overload in the Cockpit. West Point, New York, U.S.: Operations Research Center, Military Academy. 2009.
- Wang, Z.; Tchernev, J. “The Myth of Media Multitasking.” Journal of Communication Volume 62 (3): 493–513. 2012.
- Willingham, D.T. Cognition: The Thinking Animal. Upper Saddle River, New Jersey, U.S.: Pearson Prentice Hall. 2004.
- Driver, J.; Spence, C.J. “Spatial Synergies Between Auditory and Visual Attention.” In C.M.M. Umilta (editor), Attention and Performance. Cambridge, Massachusetts, U.S.: MIT Press. 1994.
- Wickens, C.D. “Processing Resources in Attention, Dual Task Performance, and Workload Assessment.” In R. Parasuraman and D.R. Davies (editors), Varieties of Attention. New York: Academic Press. 1984.
- Goldstein, E.B. Cognitive Psychology: Connecting Mind, Research, and Everyday Experience. Belmont, California, U.S.: Thomson Wadsworth. 2005.
- Anderson, J.R. Rules of the Mind. Hillsdale, New Jersey, U.S.: Erlbaum. 1993.
- Logan, G.D. “An Instance Theory of Attention and Memory.” Psychological Review Volume 109, 376–400. 2002.
- Lee, S. (editor). Encyclopedia of School Psychology. Thousand Oaks, California, U.S.: Sage Publications, Inc. 2005.
- Baddeley, A.D. “Selective Attention and Performance in Dangerous Environments.” British Journal of Psychology Volume 63, 537–546. 1972.
- Fischer, E.; Haines, R.F.; Price, T.A. Cognitive Issues in Head-Up Displays, NASA Technical Paper 1711. Moffett Field, California, U.S.: NASA Ames Research Center. 1980.
- Johnston, W.A.; Dark, V.J. “Selective Attention.” Annual Review of Psychology Volume 37, 43–75. 1986.
- Drinkwater, B.L. “Performance of Civil Aviation Pilots Under Conditions of Sensory Input Overload.” Aerospace Medicine Volume 38, 164. 1967.
- Harris, D. (editor). Human Factors for Civil Flight Deck Design. Hampshire, U.K.: Ashgate Publishing. 2004.
- Roberts, D. (editor). Signals and Perception. Hampshire, U.K.: The Open University. 2002.
- Dismukes, K.; Young, G.; Sumwalt, C.R. “Cockpit Interruptions and Distractions.” ASRS Directline Issue 10. December 1998.
- Strayer, D.L.; Drews, F.A. “Attention.” In Durso, F.T. (editor), Handbook of Applied Cognition. West Sussex, U.K.: John Wiley and Sons. 2007.