An Abstract Interpretation Recipe for Machine Learning Fairness

Abstract

Nowadays, machine learning software is increasingly used to assist or even automate decisions with socio-economic impact. In April 2021, the European Commission proposed a first legal framework on machine learning software, the Artificial Intelligence Act, which imposes strict requirements to minimize the risk of discriminatory outcomes. In this context, methods and tools for formally certifying fairness or detecting bias are extremely valuable.

In this talk, we will cook up together a static analysis for certifying the fairness of neural network classifiers. We will get familiar with the basic recipe for abstract interpretation-based static analyses, and we will then gather and combine the best ingredients to achieve an optimal trade-off between precision and scalability.
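To give a flavor of the recipe, here is a minimal, illustrative sketch (not the actual analyzer presented in the talk): it propagates intervals through a tiny ReLU classifier with made-up weights and checks a simple dependency-fairness property, namely that the predicted class cannot change when only the sensitive input flips.

```python
# Illustrative sketch only: toy network shape, weights, and the fairness
# check are assumptions made for this example, not the speaker's tool.

from typing import List, Tuple

Interval = Tuple[float, float]


def affine(intervals: List[Interval], weights: List[List[float]], bias: List[float]) -> List[Interval]:
    """Soundly propagate intervals through an affine layer y = Wx + b."""
    out = []
    for row, b in zip(weights, bias):
        lo = hi = b
        for (l, u), w in zip(intervals, row):
            lo += w * l if w >= 0 else w * u
            hi += w * u if w >= 0 else w * l
        out.append((lo, hi))
    return out


def relu(intervals: List[Interval]) -> List[Interval]:
    """ReLU on intervals: clamp the bounds below at zero."""
    return [(max(0.0, l), max(0.0, u)) for l, u in intervals]


def analyze(inputs: List[Interval]) -> Interval:
    """Abstract forward execution of a fixed 2-layer network (toy weights)."""
    h = relu(affine(inputs, [[1.0, -2.0, 0.5], [0.3, 1.0, -1.0]], [0.0, 0.1]))
    (out,) = affine(h, [[1.0, -1.0]], [-0.2])
    return out


# Non-sensitive features range over [0, 1]; the sensitive feature is fixed
# to each of its two values in turn.
for sensitive in (0.0, 1.0):
    lo, hi = analyze([(0.0, 1.0), (0.0, 1.0), (sensitive, sensitive)])
    print(f"sensitive={sensitive}: output in [{lo:.3f}, {hi:.3f}]")

# If both output intervals lie strictly on the same side of the decision
# threshold (here 0), the classification provably does not depend on the
# sensitive feature over this input region; otherwise the analysis is
# inconclusive and a more precise abstract domain or input partitioning
# is needed -- the precision/scalability trade-off mentioned above.
```

The interval domain is cheap but coarse; in practice one would trade precision for scalability by switching to or combining more expressive abstract domains, which is exactly the design space the talk explores.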

Date:
Event:
Location: 🇫🇷 CEA-LIST, France

Lyra