The third major evolutionary advancement in aviation technology could be upon us: the widespread adoption of artificial intelligence (AI), which ultimately may make autonomous flight not just possible but ubiquitous.
The European Union Aviation Safety Agency (EASA) in February published its first “Artificial Intelligence Roadmap,” a 30-page white paper that aims “to discuss the implication of AI on the aviation sector and identify high-level objectives to be met and actions to be taken to respond to” questions raised by AI’s inevitable encroachment on aviation.
“EASA AI Roadmap 1.0 is an initial proposal … a starting point intended to serve as a basis for discussion,” EASA Executive Director Patrick Ky said. “It is intended to be a living document, which will be amended once a year and augmented, deepened, improved through discussions and exchanges of views, but also practical work on AI development in which [EASA] is already engaged.”
EASA noted that the last two major technological evolutions in aviation were the introduction of jet engines in the 1950s, which significantly increased the speed of flight, and the adoption of fly-by-wire avionics in the 1980s, which brought computer technology into the cockpit. As significant as these two game-changing developments were, “AI is probably the most disruptive,” EASA cautioned in its Artificial Intelligence Roadmap.
According to EASA, the four major questions raised by the introduction of AI in aviation are:
- How to establish public trust in AI-based systems;
- How to integrate the ethical dimension of AI (transparency, non-discrimination, fairness, etc.) into safety certification processes;
- How to prepare for the certification of AI systems; and
- What standards, protocols and methods need to be developed to ensure that AI will further improve the current level of safety of air transport.
EASA envisions three stages of AI’s rollout in aviation: systems that will assist pilots (2022–2025); human-machine collaboration in flying an aircraft, such as a “virtual” first officer (2025–2030); and autonomous commercial air transport, or, more colloquially, pilotless airliners that fly themselves (2035 and beyond).
EASA broadly defines AI as “any technology that appears to emulate the performance of a human.” Ultimately, the widespread deployment of AI in aviation comes down to a matter of trust, EASA stated.
“A European ethical approach to AI is central to strengthen citizens’ trust in the digital development and aims at building a competitive advantage for European companies,” according to the EASA roadmap. “Only if AI is developed and used in a way that respects widely shared ethical values can it be considered trustworthy. Therefore, there is a need for ethical guidelines that build on the existing regulatory framework. In June 2018, the [European] Commission set up a High-Level Expert Group on Artificial Intelligence (AI HLEG), the general objective of which was to support the implementation of the European strategy on AI. This includes the elaboration of recommendations on future-related policy development and on ethical, legal and societal issues related to AI, including socio-economic challenges. In April 2019, the AI HLEG proposed the following seven key requirements for trustworthy AI, which were published in its report on Ethics Guidelines on Trustworthy Artificial Intelligence.”
The guidelines fall into seven categories: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.
“The guidelines developed by AI HLEG are non-binding and as such do not create any new legal obligations,” EASA noted. “The Commission proposes to all stakeholders, including regulators, to test the non-binding practical implementation of these ethical guidelines. The EASA strategy embraces this approach from an aviation perspective, and EASA will participate in the testing and improvement of these guidelines. Finally, the trustworthiness concept will necessarily be a key enabler of the societal acceptance of AI.”
As EASA pointed out, “the most discussed application of ML [or machine learning, a key element of AI] is autonomous flight,” adding: “The drone market has paved the way, and we can see now the emergence of new business models striving for the creation of air taxi systems to respond to the demand for urban air mobility. Autonomous vehicles will inevitably have to rely on systems to enable complex decisions, e.g. to ensure the safe flight and landing or to manage the separation between air vehicles with reduced distances compared to current ATM [air traffic management] practices. This is where AI will come into play: to enable full autonomy, very powerful algorithms will be necessary to cope with the huge amount of data generated by the embedded sensors and by the machine-to-machine communications.”
Beyond what EASA describes as the “holy grail of autonomous flight,” AI/ML is expected to open the way to the design of new systems that will change the relationship between pilots and aircraft systems.
These include:
- “Reducing the use of human resources for tasks a machine can do, thus allowing [humans] to better concentrate on high added-value tasks, in particular the safety of the flight.
- “Putting humans at the center of complex decision processes, assisted by the machine.
- “Addressing the impact of human performance limitations. AI may assist the crew by advising on routine tasks (e.g., flight profile optimization) or providing enhanced advice on aircraft management issues or [issues of a] flight tactical nature, helping the crew to take decisions, in particular in high workload circumstances (e.g., a go-around or diversion). AI may also support the crew by anticipating and preventing some critical situations according to the operational context and the crew health situation.”
Featured image: © graphixmania | VectorStock, modified by Susan Reed
Chart: European Union Aviation Safety Agency