
Will Keegan is the CTO of Lynx Software Technologies.
Artificial intelligence (AI) is a hot term, and interest is high: recent Gartner research found that 48% of CIOs have already deployed, or plan to deploy, AI and machine learning technologies this year. However, interest in AI is at odds with AI maturity. In some industries (e.g., customer experience with chatbots), the cost of a wrong outcome is low enough that AI experimentation and implementation can proceed. But when organizations run mission-critical AI applications, where a wrong outcome can lead to loss of life, AI maturity is a must, and accuracy and security are the key differentiators in achieving safety.
Rushing safety engineering processes, building with new technology that regulators are still grappling with, and trying to generate ROI on an aircraft with a historically 30-year production life cycle is not a model for success. For industries such as automotive and aerospace, consumer confidence that systems are safe is a must if this market is to evolve.
My company collaborates on several Level 4 autonomy platforms, and we see a common design roadblock as organizations build safety nets to eliminate single points of failure for critical functions. The preferred method of achieving redundancy is to replicate functions on independent sets of hardware (usually three sets, to implement triple modular redundancy).
Aside from size, weight, power, and budget issues, replicating functions on separate but identical hardware components can still lead to common-mode failures, where the redundant components fail together because of a shared internal design flaw. Safety authorities therefore expect redundancy to be implemented with dissimilar hardware.
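To make the idea concrete, here is a minimal sketch of a two-out-of-three majority voter of the kind used in triple modular redundancy. The channel values, agreement tolerance, and function names are invented for illustration and do not reflect any particular platform.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative 2-out-of-3 majority voter for triple modular redundancy.
 * Each channel would run the same function on its own (ideally dissimilar)
 * hardware; the voter masks a single faulty channel. The tolerance value
 * and names are hypothetical. */

#define TOLERANCE 2  /* max difference (raw counts) for two channels to "agree" */

static bool agree(int32_t a, int32_t b)
{
    int32_t diff = a - b;
    return (diff < 0 ? -diff : diff) <= TOLERANCE;
}

/* Writes the voted value and returns true if at least two channels agree;
 * returns false (no majority) so the caller can enter a safe state. */
bool tmr_vote(int32_t ch_a, int32_t ch_b, int32_t ch_c, int32_t *out)
{
    if (agree(ch_a, ch_b)) { *out = ch_a; return true; }
    if (agree(ch_a, ch_c)) { *out = ch_a; return true; }
    if (agree(ch_b, ch_c)) { *out = ch_b; return true; }
    return false; /* possible common-mode or double failure: fail safe */
}
```

The voting logic itself is trivial; the hard part, and the reason authorities push for dissimilar hardware, is ensuring the three channels cannot all be wrong in the same way at the same time.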
The adoption of dynamic architectures is hotly debated in the mission-critical application community. Safety systems are usually built around static methods: the purpose of safety analysis is to examine a system's behavior and ensure that all of it is predictable and will operate safely in its environment.
Static systems lend themselves to analysis because the system's functionality and parameters are known in advance, for both human and automated static analysis. Dynamically changing fundamental system properties creates prominent analysis obstacles.
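As a minimal sketch of why that is, consider the kind of fixed task table a static architecture relies on: every schedulable entity, period, and timing budget is declared at compile time and can be verified offline, with nothing created or resized at run time. All names and numbers below are invented.

```c
#include <stdint.h>

/* Illustrative static configuration: tasks, periods, and worst-case
 * execution budgets are fixed at compile time, so schedulability and
 * memory use can be analyzed before the system ever runs. */

typedef struct {
    const char *name;
    uint32_t    period_ms;      /* activation period */
    uint32_t    wcet_budget_us; /* worst-case execution time budget */
    uint8_t     priority;       /* fixed priority, higher = more urgent */
} task_config_t;

static const task_config_t kTaskTable[] = {
    { "flight_control", 10,  1500, 10 },
    { "sensor_fusion",  20,  3000,  8 },
    { "health_monitor", 100, 1000,  5 },
};

enum { kTaskCount = sizeof(kTaskTable) / sizeof(kTaskTable[0]) };
```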
The debate over adopting dynamic capabilities centers on the idea that a system can modify its behavior in flight to adapt to unpredictable scenarios. “Limp home mode” is a capability that benefits greatly from a dynamic architecture: when a major system failure occurs (e.g., a bird is caught in a propeller), other parts of the system intelligently redistribute the required functions among the available resources to preserve enough functionality to protect human life.
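A dynamic architecture, by contrast with the static table above, redistributes work after a failure. The sketch below is a hypothetical limp-home reallocation, not any real product's logic: when a node is lost, the most critical functions are re-hosted first onto whatever healthy capacity remains, and the least critical are shed. All names, capacities, and loads are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical limp-home reallocation: after a node failure, re-host
 * functions in descending order of criticality onto surviving nodes
 * with spare capacity; shed low-criticality functions if none remains. */

typedef struct { const char *name; bool healthy; uint32_t capacity; uint32_t used; } node_t;
typedef struct { const char *name; uint32_t criticality; uint32_t load; int host; } function_t;

static void reallocate(node_t *nodes, int n_nodes, function_t *fns, int n_fns)
{
    for (uint32_t crit = 10; crit > 0; crit--) {                /* most critical first */
        for (int f = 0; f < n_fns; f++) {
            if (fns[f].criticality != crit) continue;
            if (fns[f].host >= 0 && nodes[fns[f].host].healthy) continue; /* still hosted */
            fns[f].host = -1;
            for (int n = 0; n < n_nodes; n++) {                  /* find spare capacity */
                if (!nodes[n].healthy) continue;
                if (nodes[n].capacity - nodes[n].used >= fns[f].load) {
                    nodes[n].used += fns[f].load;
                    fns[f].host = n;
                    break;
                }
            }
            if (fns[f].host < 0)
                printf("function %s shed (no capacity)\n", fns[f].name);
        }
    }
}
```

It is exactly this kind of run-time decision, which functions survive and where they land, that a safety analysis of a dynamic architecture has to bound in advance.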
AI is needed because, without human oversight, computers must decide how to control machines at multiple levels, including mission-critical ones. The permutations of variables that can affect the system's state are vast; model-driven system control and hazard analysis are essential to achieving Level 5 autonomy safely. However, there are hundreds of nuanced artificial neural network designs, all with trade-offs. Over three decades, safety standards have come to support only a few programming languages (C, C++, Ada) that are sufficiently well understood to allow clear usage guidelines alongside a mature ecosystem of tool suppliers.
It is clear that the wide world of neural networks must be bounded, characterized, and managed according to the objectives and principles laid down in DO-178C DAL A and ISO 26262 ASIL D. The FAA publication TC-16/4, “Verification of Adaptive Systems,” addresses the challenges very well. However, we still do not have strong usage guidelines or development process standards for artificial neural networks.
The foundation of advanced analysis for automotive safety systems is a massive model that maps passenger interactions to vehicle interfaces and traces vehicle characteristics into functions, which in turn become software distributed across compute components. In the future, these models will become significantly more complex as they capture the dynamics of autonomous platforms. The big questions to ask of these models are a) what is sufficient and b) what is correct?
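One way to picture such a model is as a traceability structure from hazards down to the software and compute nodes that mitigate them. The sketch below is deliberately simplified and every identifier is invented; it only illustrates the shape of the mapping whose sufficiency and correctness those questions probe.

```c
#include <stdint.h>

/* Simplified traceability model: hazards map to vehicle functions, and
 * functions map to the software items and compute nodes that implement
 * them. A real model also carries requirements, failure conditions, and
 * integrity allocations; all identifiers here are illustrative. */

#define MAX_LINKS 8

typedef struct {
    const char *id;                          /* e.g. "HAZ-042: unintended acceleration" */
    const char *mitigating_functions[MAX_LINKS];
} hazard_t;

typedef struct {
    const char *id;                          /* e.g. "FUNC-braking-arbitration" */
    const char *software_items[MAX_LINKS];   /* implementing software components */
    const char *compute_node;                /* where that software is deployed */
    uint8_t     integrity_level;             /* e.g. ASIL A..D encoded as 1..4 */
} vehicle_function_t;
```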
Clearly, we need more from certification. How can system validation take place for complex systems when those responsible lack knowledge of technical details, such as kernel design and memory controllers, that are critical to enforcing architectural properties? Component-level suppliers are generally not involved in system validation; instead, they are asked to develop products according to strict documentation, coding, and testing processes and to present the evidence.
A reasonable concern, however, is whether such evidence can meaningfully demonstrate that the intended behavior of components is consistent with the intentions of the system integrators.
In the automotive industry, aggressive claims have been made about the timeline for releasing Level 5 autonomous platforms (no driver, no steering wheel, no environmental restrictions). The reality has been very different. The aviation industry is, quite rightly, more conservative. I like the framework the European Aviation Safety Agency (EASA) published last year, which focuses on AI applications that “help people.”
Key elements of this involve building a “trust analysis” of the artificial intelligence block based on:
- Learning assurance: covering the shift from programming to learning, since existing development assurance methods are not adapted to cover AI/ML learning processes
- Explainability: providing understandable information about how an AI/ML application arrives at its results
- Safety risk mitigation: since it is not possible to open the “AI black box” as far as necessary, this provides guidance on how to address the safety risks arising from that inherent uncertainty (a sketch of one such mitigation pattern follows this list)
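One common pattern for that last element, offered here purely as an illustration rather than anything the EASA framework prescribes, is to wrap the ML inference in a conventionally developed monitor that only accepts outputs inside a pre-verified envelope. The limits, fallback behavior, and function name below are hypothetical.

```c
#include <stdbool.h>

/* Illustrative runtime mitigation: an ML component proposes a command,
 * and a conventionally developed monitor checks it against a pre-verified
 * safe envelope, substituting a safe fallback otherwise. The envelope
 * limits and fallback choice are invented for the example. */

#define STEER_LIMIT_DEG      25.0f  /* hypothetical verified command envelope */
#define STEER_RATE_LIMIT_DEG  3.0f  /* max change allowed per control cycle */

float gate_ml_steering(float ml_cmd_deg, float prev_cmd_deg, bool *fallback_used)
{
    float delta = ml_cmd_deg - prev_cmd_deg;

    bool within_range = ml_cmd_deg >= -STEER_LIMIT_DEG && ml_cmd_deg <= STEER_LIMIT_DEG;
    bool within_rate  = delta >= -STEER_RATE_LIMIT_DEG && delta <= STEER_RATE_LIMIT_DEG;

    if (within_range && within_rate) {
        *fallback_used = false;
        return ml_cmd_deg;
    }
    *fallback_used = true;
    return prev_cmd_deg;  /* hold the last verified-safe command */
}
```

The monitor itself can be developed and verified with the mature, static methods the standards already support, which is what makes the pattern attractive while the ML guidance matures.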
From this, and from conversations we’ve had with clients, pragmatism seems to be the word that best describes the industry’s approach. Just as lane departure detection is becoming relatively common in new vehicles, we will first see AI used in applications where humans remain in control. One example is a vision-based system that aids in-flight refueling. These functions, important but peripheral to the most critical parts of system functionality, are great places to build trust in the technology.
From here, we will see the engineered deployment of AI in increasingly challenging systems with “switch to human control” overrides. Some analysts have suggested that we may never reach the point of fully autonomous vehicles on our streets. I do, however, believe that we will reach the milestone of fully autonomous vehicles in the air. Trusting the “crawl, walk, run” path the industry is currently on is exactly the right way to get there.