Interpretability-Aware Verification of Machine Learning Software

Abstract

Machine learning (ML) software is increasingly being employed in high-stakes or sensitive applications. This poses important safety, privacy, and non-discrimination challenges. As a consequence, research in ML verification has rapidly gained popularity, and the demand for interpretable ML models is increasingly pronounced. Interpretability and verification are typically seen as orthogonal concerns: research in ML interpretability is mainly carried out in the ML community, while research in ML verification is mostly carried out in the formal methods community, with little communication and exchange between the two areas. In this talk, we advocate for closing this gap by presenting our recent and ongoing work on interpretability-aware verification of ML software. On the one hand, we show how verification can aid interpretability by providing a new feature importance measure for support vector machines. On the other hand, we discuss how interpretability can guide verification by proposing a new saliency-guided robustness verification problem for deep neural networks. We conclude with plans and perspectives on future work.
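
As a rough illustration of the first direction (verification aiding interpretability), the sketch below shows one way a verification-style query can yield a per-feature importance score for a linear SVM: a feature is deemed more important if a smaller perturbation of that feature alone suffices to flip the classifier's decision on a given input. This is a minimal sketch under assumed simplifications (linear kernel, single-feature perturbations, closed-form flip distance), not the measure presented in the talk.

```python
# Hedged sketch: verification-style per-feature importance for a linear SVM.
# Assumption: a linear model, so the minimal single-feature perturbation that
# flips the decision has a closed form; this only illustrates the general idea.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.svm import LinearSVC

X, y = load_breast_cancer(return_X_y=True)
clf = LinearSVC(dual=False).fit(X, y)

w = clf.coef_.ravel()      # weight vector of the trained linear SVM
b = clf.intercept_[0]      # bias term

def flip_distance_per_feature(x):
    """Minimal |delta| on each single feature that flips sign(w.x + b).

    Perturbing only feature i by delta changes the score by w[i] * delta,
    so the decision flips once |delta| >= |w.x + b| / |w[i]|
    (infinite if w[i] == 0, i.e. the feature cannot flip the decision alone).
    """
    score = float(w @ x + b)
    with np.errstate(divide="ignore"):
        return np.abs(score) / np.abs(w)

x = X[0]
dist = flip_distance_per_feature(x)
top = np.argsort(dist)[:5]  # smallest flip distance -> most influential
print("Most influential features for this input:", top)
```

For non-linear models, the same question (how far must a single feature move before the prediction changes?) has no closed form and is where a formal verifier would come in.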

Date
Event
Location
🇫🇷 CEA-LIST, France
Links
