(Hyper)Safety Certification of Neural Network Surrogates for Aircraft Braking Distance Estimation

Abstract

Formal methods provide rigorous guarantees of correctness for both hardware and software systems. Their use is well established in industry, notably to certify the safety of critical applications subject to stringent certification processes. With the rising prominence of machine learning, the integration of machine-learned components into critical systems presents novel challenges for formal methods. This talk is a tutorial on the use of abstract interpretation-based static analysis techniques for the certification of (hyper)safety properties of machine learning models. As a running example, we focus on the use case of neural network surrogates for aircraft braking distance estimation. We will present several (combinations of) abstractions, highlighting their strengths and limitations, through the lens of different (hyper)safety properties. We will also survey how these abstractions serve the further goal of enhancing machine learning explainability. We will conclude with perspectives on possible future research directions in this rapidly evolving field.

Date:
Event: Tech Talk: Formal Verification of AI
Location: 🇫🇷 Airbus, France
