Interpretability-Aware Verification of Machine Learning Software

Abstract

Machine learning (ML) software is increasingly being employed in high-stakes or sensitive applications. This poses important safety, privacy, and non-discrimination challenges. As a consequence, research in ML verification has rapidly gained popularity, and the demand for interpretable ML models is increasingly pronounced. Interpretability and verification are typically seen as orthogonal concerns: research in ML interpretability is mainly carried out in the ML community, while research in ML verification is mostly carried out in the formal methods community, with little communication or exchange between these research areas. In this talk, we advocate for closing this gap by presenting our recent and ongoing work on interpretability-aware verification of ML software.

Date
Location
🇫🇷 Paris, France
Links
