In the coming years, humanity will see the adoption of breakthrough innovations across many fields, including aviation, where drones are already delivering packages to some doorsteps and urban air mobility is taking shape.
This world of innovation can be divided into creators and consumers. Disruptive breakthrough innovations come from fearless creators. The aviation industry is a fearful consumer, and the next unknown that frightens us is artificial intelligence (AI).
What is AI? AI is defined as “the capability of computer systems or algorithms to imitate intelligent human behavior.” Basic intelligence is defined as “the ability to learn or understand or to deal with new or trying situations.” Humans learn from data. We upload data to our brains via our senses. We see and read, hear and listen, taste, smell, and touch and feel. We also learn from our mistakes, by making them ourselves, or by reading or hearing about the mistakes of others. We then apply that data — our memories of the sounds, sights, tastes, smells, and feelings, together with our memories of success and failure — and hope to succeed. But we sometimes err, and hopefully, if we are intelligent, we learn from that, too.
Machines learn by uploading data directly into their memories. We train them by giving them data, filling their memories, and letting them process that data and come up with answers. We further train the machines by telling them whether their answers are correct. Then they learn from their successes and their mistakes, much as humans do.
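To make that feedback loop concrete, here is a minimal sketch in Python of the kind of training described above. The scenario (deciding whether conditions call for a go-around), the numbers, and the simple perceptron-style model are all invented for illustration; real aviation AI systems are far more complex, but the learn-from-being-told-you-were-wrong mechanism is the same in spirit.

```python
# A minimal sketch of the training loop described above: the machine is given
# data (inputs with known answers), produces an answer, is "told" whether it
# was right, and adjusts itself from that feedback. All data and the scenario
# are hypothetical, chosen only to illustrate the idea.

import random

# Hypothetical training data: (headwind_kt, runway_wet) -> 1 if a go-around
# was the correct call, 0 otherwise. Entirely made up for illustration.
examples = [((5, 0), 0), ((30, 1), 1), ((10, 0), 0), ((25, 1), 1),
            ((8, 1), 0), ((35, 0), 1), ((12, 0), 0), ((28, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.01

for epoch in range(100):                      # repeated exposure to the data
    random.shuffle(examples)
    for (wind, wet), correct_answer in examples:
        guess = 1 if wind * weights[0] + wet * weights[1] + bias > 0 else 0
        error = correct_answer - guess        # the "you were right/wrong" signal
        # Learning from the mistake: nudge the parameters toward the answer.
        weights[0] += learning_rate * error * wind
        weights[1] += learning_rate * error * wet
        bias += learning_rate * error

print("learned parameters:", weights, bias)
```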
To err is human, according to the poet Alexander Pope. Thus, true AI, replicating human intelligence, will err. It can capture much more information than humans can, process big data in seconds, and never forget its past mistakes, so it is bound to err less. Someday, we will feed AI so much information, and it will process that information so quickly, that it will seem never to be wrong. But it will still err on occasion, and while AI remains in its infancy, it will err frequently.
We are used to humans making mistakes. Human factors training, crew resource management training, and threat and error management theories are all built around the understanding that humans make mistakes, and they are intended to minimize those mistakes and to mitigate their negative effects. We have two pilots on the flight deck because we want one to compensate for the other when he or she makes a mistake. We have standard operating procedures to help us make fewer mistakes, and technology and automation not only to help us err less but also to keep the mistakes we do make from developing into accidents.
We are used to systems failing. We are also used to systems having design flaws. That’s why we have redundancies. That’s why we make systems easy for humans to monitor, and why we train humans in monitoring system performance.
What we are not accustomed to is having systems make mistakes. AI systems may work perfectly, just as their creators envisioned, and still make mistakes. We know many possible reasons for humans to make mistakes. We call them “threats,” and we have built a system for identifying and managing them. But what causes an AI system to make a mistake? Unlike us, it does not make mistakes when it is tired, stressed, or overloaded, or when it is having a bad day.
How do we trap errors made by AI systems, which are supposed to be able to process data much better and faster than we can? And how do we recover?
We can say that statistically AI will perform better than humans, or that it is better to have some system in place where none exists now. But in aviation, when it comes to safety, we are always trying to use all available resources to achieve the best possible outcome, not just a good enough outcome.
One key element is training. Humans operating or supervising AI systems should be trained on AI and understand what can cause an AI system to make a mistake and how to identify that mistake. It might be easy to spot an image of a man with three arms in an AI creation, but it could be harder to spot an incorrect weather forecast or performance assessment. An AI system operator must understand the weaknesses of the system — including areas in which the system was not trained with sufficient data, or in which prompts can confuse the system and produce errors.
For example, pilots are trained to put less trust in weather forecasts when the barometric pressure is at extreme values because the forecast model is trained by feeding it past data, and, as days with extreme pressure are rare, the system does not have enough past data to produce a reliable forecast.
Another element is making sure that we design AI systems with built-in protections. AI systems used in aviation that can affect safety should provide for effective monitoring of their performance. In the short term, that monitoring will be performed by humans; in the longer term, by other AI systems. A good example is autoland systems, which are either fail-passive, where humans can take over, or fail-operational, with enough built-in redundancies to self-identify failure and switch to backups. AI systems in aviation need to be either error-passive or error-operational.
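The error-passive/error-operational distinction can be sketched in a few lines of code. Everything below is hypothetical and simplified: the function names, the landing-distance scenario, and the thresholds are invented for illustration and are not drawn from any real autoland or AI system.

```python
# A minimal sketch (not a real avionics design) of the two monitoring patterns
# named above. An "error-passive" setup flags a suspect output and hands
# control back to the human; an "error-operational" setup cross-checks
# redundant systems and switches to a backup on its own.

def primary_ai_estimate(sensor_data):
    """Hypothetical primary AI output, e.g. a landing-distance estimate in meters."""
    return 1800.0

def backup_ai_estimate(sensor_data):
    """Hypothetical independent backup, trained or designed differently."""
    return 1850.0

def within_plausible_range(value, low=500.0, high=4000.0):
    """A simple sanity monitor: is the answer physically plausible at all?"""
    return low <= value <= high

def error_passive(sensor_data):
    value = primary_ai_estimate(sensor_data)
    if not within_plausible_range(value):
        return None, "REVERT TO HUMAN: AI output failed plausibility check"
    return value, "AI output accepted"

def error_operational(sensor_data, max_disagreement=300.0):
    primary = primary_ai_estimate(sensor_data)
    backup = backup_ai_estimate(sensor_data)
    if not within_plausible_range(primary) or abs(primary - backup) > max_disagreement:
        # Redundancy lets the system identify its own error and keep operating.
        return backup, "SWITCHED TO BACKUP: primary output rejected"
    return primary, "Primary AI output accepted"

print(error_passive({}))
print(error_operational({}))
```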
Aviation and technology have been dancing together for a long time. Technology has enabled aviation and made it safer, but it also has introduced unintended risks. The invention of radar, and then transponders, dramatically reduced the number of midair collisions, and the invention of GPS and enhanced ground-proximity warning systems led to similar reductions in controlled flight into terrain. The invention of advanced automation enabled precise and comfortable flight, reduced separation requirements, and provided many other benefits, but it also created automation dependency and an elevated risk of loss of control–in flight (LOC-I). The technology behind lithium batteries has made our lives better, no doubt, but it also introduced a risk in transporting them. LOC-I, runway incursions, and runway excursions have been on our “most wanted” lists of safety improvements for more than two decades. Will AI eliminate those risks? What new risks will it introduce in return?
AI is probably here to stay. For aviation safety, AI can mean a huge leap forward, but for that to happen, we must find ways to allow the innovative disruption to occur safely.
Shai Gill is the CEO of SGA. Alan Sternberg is the CEO of Beams Safety AI.