Portrait of Ulrich Aivodji

Ulrich Aivodji

Associate Academic Member
Associate Professor, École de technologie supérieure (ÉTS), Department of Software Engineering and IT
Research Topics
Representation Learning
Deep Learning
Data Mining
Optimization

Biography

Ulrich Aivodji is an Associate Professor of computer science in the Department of Software Engineering and Information Technology at the École de technologie supérieure (ÉTS) in Montréal.

He heads the Trustworthy Information Systems Lab (TISL). His research interests are computer security, data privacy, optimization and machine learning. His current work focuses on several aspects of trustworthy machine learning, such as fairness, privacy-preserving machine learning and explainability.

Before taking up his current position, he was a postdoctoral researcher at the Université du Québec à Montréal (UQAM), where he worked with Sébastien Gambs on machine learning ethics and privacy. He obtained his PhD in computer science from Université Paul-Sabatier under the supervision of Marie-José Huguet and Marc-Olivier Killijian. During his PhD, he was affiliated with the Laboratory for Analysis and Architecture of Systems of the French National Centre for Scientific Research (LAAS-CNRS) as a member of the Dependable Computing and Fault Tolerance and the Operations Research, Combinatorial Optimization and Constraints research groups.

Current Students

PhD - École de technologie supérieure
Master's research - École de technologie supérieure
Postdoctorate - École de technologie supérieure
Master's research - École de technologie supérieure
Master's research - École de technologie supérieure
Co-supervisor:
PhD - École de technologie supérieure
Co-supervisor:
PhD - École de technologie supérieure

Publications

Learning Hybrid Interpretable Models: Theory, Taxonomy, and Methods
Julien Ferry
Gabriel Laberge
A hybrid model involves the cooperation of an interpretable model and a complex black box. At inference, any input of the hybrid model is assigned to either its interpretable or complex component based on a gating mechanism. The advantages of such models over classical ones are two-fold: 1) They grant users precise control over the level of transparency of the system and 2) They can potentially perform better than a standalone black box since redirecting some of the inputs to an interpretable model implicitly acts as regularization. Still, despite their high potential, hybrid models remain under-studied in the interpretability/explainability literature. In this paper, we remedy this fact by presenting a thorough investigation of such models from three perspectives: Theory, Taxonomy, and Methods. First, we explore the theory behind the generalization of hybrid models from the Probably-Approximately-Correct (PAC) perspective. A consequence of our PAC guarantee is the existence of a sweet spot for the optimal transparency of the system. When such a sweet spot is attained, a hybrid model can potentially perform better than a standalone black box. Secondly, we provide a general taxonomy for the different ways of training hybrid models: the Post-Black-Box and Pre-Black-Box paradigms. These approaches differ in the order in which the interpretable and complex components are trained. We show where the state-of-the-art hybrid models Hybrid-Rule-Set and Companion-Rule-List fall in this taxonomy. Thirdly, we implement the two paradigms in a single method: HybridCORELS, which extends the CORELS algorithm to hybrid modeling. By leveraging CORELS, HybridCORELS provides a certificate of optimality of its interpretable component and precise control over transparency. We finally show empirically that HybridCORELS is competitive with existing hybrid models, and performs just as well as a standalone black box (or even better) while being partly transparent.
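As a reading aid, here is a minimal sketch of the gating idea described in this abstract, not the HybridCORELS method itself: a transparent model handles the inputs on which it is confident enough and defers the rest to a black box, so the fraction of inputs it keeps plays the role of the transparency level. The confidence-based gate, the scikit-learn models, and the synthetic data are illustrative assumptions.

```python
# Sketch of a hybrid model: a gate routes each input either to an interpretable
# component or to a black-box component. Not the HybridCORELS implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split


class HybridClassifier:
    def __init__(self, interpretable, black_box, confidence_threshold=0.8):
        self.interpretable = interpretable   # transparent component (e.g., shallow tree)
        self.black_box = black_box           # complex component
        self.tau = confidence_threshold      # gate: keep an input in the interpretable
                                             # part only when it is confident enough

    def fit(self, X, y):
        self.interpretable.fit(X, y)
        self.black_box.fit(X, y)
        return self

    def predict(self, X):
        proba = self.interpretable.predict_proba(X)
        use_interp = proba.max(axis=1) >= self.tau        # gating mechanism
        preds = self.black_box.predict(X)
        if use_interp.any():
            preds[use_interp] = self.interpretable.predict(X[use_interp])
        return preds, use_interp.mean()                   # predictions + achieved transparency


X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
hybrid = HybridClassifier(DecisionTreeClassifier(max_depth=3, random_state=0),
                          RandomForestClassifier(random_state=0)).fit(X_tr, y_tr)
y_hat, transparency = hybrid.predict(X_te)
print(f"accuracy={np.mean(y_hat == y_te):.3f}, transparency={transparency:.2f}")
```

Raising the confidence threshold shifts more inputs to the black box; that fraction is, roughly, the transparency knob the abstract refers to.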
A Survey on Fairness Without Demographics
Patrik Joslin Kenfack
ÉTS Montréal
The issue of bias in Machine Learning (ML) models is a significant challenge for the machine learning community. Real-world biases can be embedded in the data used to train models, and prior studies have shown that ML models can learn and even amplify these biases. This can result in unfair treatment of individuals based on their inherent characteristics or sensitive attributes such as gender, race, or age. Ensuring fairness is crucial with the increasing use of ML models in high-stakes scenarios and has gained significant attention from researchers in recent years. However, the challenge of ensuring fairness becomes much greater when the assumption of full access to sensitive attributes does not hold. The settings where this assumption does not hold include cases where (1) only limited or noisy demographic information is available or (2) demographic information is entirely unobserved due to privacy restrictions. This survey reviews recent research efforts to enforce fairness when sensitive attributes are missing. We propose a taxonomy of existing works and, more importantly, highlight current challenges and future research directions to stimulate research in ML fairness in the setting of missing sensitive attributes.
Probabilistic Dataset Reconstruction from Interpretable Models
Julien Ferry
Sébastien Gambs
Marie-José Huguet
Mohamed Siala
Interpretability is often pointed out as a key requirement for trustworthy machine learning. However, learning and releasing models that are inherently interpretable leaks information regarding the underlying training data. As such disclosure may directly conflict with privacy, a precise quantification of the privacy impact of such a breach is a fundamental problem. For instance, previous work has shown that the structure of a decision tree can be leveraged to build a probabilistic reconstruction of its training dataset, with the uncertainty of the reconstruction being a relevant metric for the information leak. In this paper, we propose a novel framework generalizing these probabilistic reconstructions in the sense that it can handle other forms of interpretable models and more generic types of knowledge. In addition, we demonstrate that under realistic assumptions regarding the interpretable models' structure, the uncertainty of the reconstruction can be computed efficiently. Finally, we illustrate the applicability of our approach on both decision trees and rule lists, by comparing the theoretical information leak associated with either exact or heuristic learning algorithms. Our results suggest that optimal interpretable models are often more compact and leak less information regarding their training data than greedily-built ones, for a given accuracy level.
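To give a feel for the kind of leakage discussed here, the sketch below counts, for a scikit-learn decision tree over binary features, how many bits of each training example remain undetermined once its decision path through the tree is known; the lower this residual entropy, the more the released model reveals. It is only a toy illustration under the assumption of uniform, independent binary features, not the probabilistic reconstruction framework of the paper.

```python
# Toy estimate of reconstruction uncertainty from a decision tree's structure.
# With 0/1 features, knowing which branch an example took at a node (threshold 0.5)
# fully determines the value of the tested feature; untested features stay uncertain.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 12))            # binary features
y = (X[:, 0] ^ X[:, 1]).astype(int)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

node_indicator = tree.decision_path(X)            # sparse matrix: examples x nodes visited
feature_of_node = tree.tree_.feature              # feature tested at each node (-2 for leaves)

total_bits = 0.0
for i in range(X.shape[0]):
    nodes = node_indicator.indices[node_indicator.indptr[i]:node_indicator.indptr[i + 1]]
    constrained = {feature_of_node[n] for n in nodes if feature_of_node[n] >= 0}
    total_bits += X.shape[1] - len(constrained)   # residual entropy (bits) for this example

print(f"average residual uncertainty: {total_bits / X.shape[0]:.2f} bits per example "
      f"(out of {X.shape[1]} bits without the tree)")
```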
Fairness Under Demographic Scarce Regime
Patrik Joslin Kenfack
Most existing works on fairness assume the model has full access to demographic information. However, there exist scenarios where demographic information is only partially available because a record was not maintained throughout data collection or due to privacy reasons. This setting is known as the demographic scarce regime. Prior research has shown that training an attribute classifier to replace the missing sensitive attributes (proxy) can still improve fairness. However, the use of proxy-sensitive attributes worsens fairness-accuracy trade-offs compared to true sensitive attributes. To address this limitation, we propose a framework to build attribute classifiers that achieve better fairness-accuracy trade-offs. Our method introduces uncertainty awareness in the attribute classifier and enforces fairness on samples with demographic information inferred with the lowest uncertainty. We show empirically that enforcing fairness constraints on samples with uncertain sensitive attributes is detrimental to fairness and accuracy. Our experiments on two datasets showed that the proposed framework yields models with significantly better fairness-accuracy trade-offs compared to classic attribute classifiers. Surprisingly, our framework outperforms models trained with constraints on the true sensitive attributes.
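The sketch below illustrates the uncertainty-awareness step described in this abstract: train an attribute classifier on the small subset with observed demographics, score each inferred attribute by its predictive entropy, and restrict the fairness signal (here a demographic parity gap) to confidently labelled samples. The synthetic data, the entropy threshold, and the use of a post-hoc gap rather than a training-time constraint are illustrative assumptions.

```python
# Uncertainty-aware proxy sketch: infer missing sensitive attributes, then use only
# confidently inferred ones when measuring fairness. Not the paper's full framework.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
a = rng.integers(0, 2, size=n)                    # true sensitive attribute (mostly unobserved)
X = rng.normal(size=(n, 5)) + a[:, None] * 0.7    # features correlated with the attribute
y = (X[:, 0] + 0.5 * a + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

labeled = rng.random(n) < 0.1                     # demographics observed for only ~10% of samples

# 1) Attribute classifier trained on the small labeled subset.
attr_clf = LogisticRegression().fit(X[labeled], a[labeled])
p = attr_clf.predict_proba(X)[:, 1]
a_hat = (p >= 0.5).astype(int)
uncertainty = -(p * np.log2(p + 1e-12) + (1 - p) * np.log2(1 - p + 1e-12))  # entropy in bits

# 2) Keep only confidently inferred attributes for the fairness signal.
confident = uncertainty < 0.5                     # threshold is an assumption

task_clf = LogisticRegression().fit(X, y)
y_hat = task_clf.predict(X)

def dp_gap(pred, group):
    return abs(pred[group == 1].mean() - pred[group == 0].mean())

print("DP gap, all proxy labels:      ", round(dp_gap(y_hat, a_hat), 3))
print("DP gap, confident proxy labels:", round(dp_gap(y_hat[confident], a_hat[confident]), 3))
print("DP gap, true attribute:        ", round(dp_gap(y_hat, a), 3))
```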
Fooling SHAP with Stealthily Biased Sampling
Gabriel Laberge
Satoshi Hara
Mario Marchand
SHAP explanations aim at identifying which features contribute the most to the difference in model prediction at a specific input versus a background distribution. Recent studies have shown that they can be manipulated by malicious adversaries to produce arbitrary desired explanations. However, existing attacks focus solely on altering the black-box model itself. In this paper, we propose a complementary family of attacks that leave the model intact and manipulate SHAP explanations using stealthily biased sampling of the data points used to approximate expectations w.r.t. the background distribution. In the context of fairness audits, we show that our attack can reduce the importance of a sensitive feature when explaining the difference in outcomes between groups, while remaining undetected. These results highlight the manipulability of SHAP explanations and encourage auditors to treat post-hoc explanations with skepticism.
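The mechanism is easiest to see on a linear model with independent features, for which interventional SHAP values have the closed form phi_j = w_j * (x_j - mean_background[x_j]): whoever controls the background sample controls the attributions. The sketch below illustrates this effect with a hand-picked biased subsample; it is not the paper's stealthy sampling attack, and the data and feature roles are assumptions.

```python
# Why biasing the background sample manipulates SHAP-style explanations: for a linear
# model, the attribution of a feature is its weight times the gap between the input
# value and the background mean, so matching the background mean to the input hides it.
import numpy as np

rng = np.random.default_rng(0)
w = np.array([2.0, -1.0, 1.5])          # weights: feature 0 plays the "sensitive" role
background = rng.normal(size=(1000, 3))
x = np.array([1.2, 0.3, -0.5])          # input being explained

def linear_shap(x, w, background):
    return w * (x - background.mean(axis=0))

honest = linear_shap(x, w, background)

# Biased sampling: keep only background points whose sensitive feature is close to the
# input's value, leaving the other features untouched.
biased = background[np.abs(background[:, 0] - x[0]) < 0.3]

manipulated = linear_shap(x, w, biased)
print("honest attributions:     ", np.round(honest, 3))
print("manipulated attributions:", np.round(manipulated, 3))  # feature 0 collapses toward 0
```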
Leveraging Integer Linear Programming to Learn Optimal Fair Rule Lists
Julien Ferry
Sébastien Gambs
Marie-José Huguet
Mohamed Siala
Washing The Unwashable : On The (Im)possibility of Fairwashing Detection
Ali Shahin Shamsabadi
Mohammad Yaghini
Natalie Dullerud
Sierra Wyllie
Aisha Alaagib Alryeh Mkean
Sébastien Gambs
Nicolas Papernot
The use of black-box models (e.g., deep neural networks) in high-stakes decision-making systems, whose internal logic is complex, raises the need for providing explanations about their decisions. Model explanation techniques mitigate this problem by generating an interpretable and high-fidelity surrogate model (e.g., a logistic regressor or decision tree) to explain the logic of black-box models. In this work, we investigate the issue of fairwashing, in which model explanation techniques are manipulated to rationalize decisions taken by an unfair black-box model using deceptive surrogate models. More precisely, we theoretically characterize and analyze fairwashing, proving that this phenomenon is difficult to avoid due to an irreducible factor: the unfairness of the black-box model. Based on the theory developed, we propose a novel technique, called FRAUD-Detect (FaiRness AUDit Detection), to detect fairwashed models by measuring a divergence over subpopulation-wise fidelity measures of the interpretable model. We empirically demonstrate that this divergence is significantly larger in purposefully fairwashed interpretable models than in honest ones. Furthermore, we show that our detector is robust to an informed adversary trying to bypass our detector. The code implementing FRAUD-Detect is available at https://github.com/cleverhans-lab/FRAUD-Detect.
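A rough sketch of the detection signal described here: compute, within each demographic subgroup, the joint distribution of (black-box decision, surrogate decision), then compare the two subgroup distributions with a divergence. A surrogate that is faithful everywhere gives similar distributions across groups, while a fairwashed one that rewrites one group's outcomes does not. The synthetic decisions and the specific KL formulation are illustrative assumptions; the actual FRAUD-Detect implementation is the one in the repository linked above.

```python
# Subgroup-wise fidelity divergence as a fairwashing signal (illustration only).
import numpy as np

def fidelity_distribution(bb_pred, surrogate_pred):
    """Joint distribution of (black-box output, surrogate output) within a subgroup."""
    counts = np.zeros((2, 2))
    for b, s in zip(bb_pred, surrogate_pred):
        counts[b, s] += 1
    return counts.flatten() / counts.sum()

def kl(p, q, eps=1e-12):
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=4000)
bb = rng.integers(0, 2, size=4000)                      # black-box decisions
honest = np.where(rng.random(4000) < 0.95, bb, 1 - bb)  # faithful surrogate everywhere
# A fairwashed surrogate stays faithful on one group but rewrites the other group's outcomes.
washed = np.where(group == 0, honest, np.where(rng.random(4000) < 0.7, bb, 1 - bb))

for name, surrogate in [("honest", honest), ("fairwashed", washed)]:
    d0 = fidelity_distribution(bb[group == 0], surrogate[group == 0])
    d1 = fidelity_distribution(bb[group == 1], surrogate[group == 1])
    print(f"{name}: subgroup fidelity divergence = {kl(d0, d1):.3f}")
```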
Local Data Debiasing for Fairness Based on Generative Adversarial Training
François Bidet
Sébastien Gambs
Rosin Claude Ngueveu
Alain Tapp
The widespread use of automated decision processes in many areas of our society raises serious ethical issues with respect to the fairness of the process and the possible resulting discrimination. To solve this issue, we propose a novel adversarial training approach called GANSan for learning a sanitizer whose objective is to prevent the possibility of any discrimination (i.e., direct and indirect) based on a sensitive attribute by removing the attribute itself as well as the existing correlations with the remaining attributes. Our method GANSan is partially inspired by the powerful framework of generative adversarial networks (in particular Cycle-GANs), which offers a flexible way to learn a distribution empirically or to translate between two different distributions. In contrast to prior work, one of the strengths of our approach is that the sanitization is performed in the same space as the original data by only modifying the other attributes as little as possible, thus preserving the interpretability of the sanitized data. Consequently, once the sanitizer is trained, it can be applied to new data locally by an individual on their profile before releasing it. Finally, experiments on real datasets demonstrate the effectiveness of the approach as well as the achievable trade-off between fairness and utility.
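The following PyTorch sketch shows the general adversarial-sanitization pattern described here: a sanitizer reconstructs the attributes as closely as possible while an adversary tries to recover the sensitive attribute from the sanitized output. The architectures, loss weights, label-flipping adversarial term, and synthetic data are illustrative assumptions, not the GANSan training procedure.

```python
# Adversarial sanitization sketch: reconstruct the data while hiding a sensitive attribute.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d = 2000, 8
s = torch.randint(0, 2, (n, 1)).float()        # sensitive attribute
x = torch.randn(n, d) + 0.8 * s                # attributes correlated with s

sanitizer = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, d))
adversary = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))
opt_s = torch.optim.Adam(sanitizer.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce, mse, lam = nn.BCEWithLogitsLoss(), nn.MSELoss(), 1.0

for step in range(2000):
    x_clean = sanitizer(x)
    # 1) Adversary learns to predict the sensitive attribute from sanitized data.
    opt_a.zero_grad()
    adv_loss = bce(adversary(x_clean.detach()), s)
    adv_loss.backward()
    opt_a.step()
    # 2) Sanitizer stays close to the original data while fooling the adversary.
    opt_s.zero_grad()
    san_loss = mse(x_clean, x) + lam * bce(adversary(x_clean), 1 - s)
    san_loss.backward()
    opt_s.step()

with torch.no_grad():
    acc = ((adversary(sanitizer(x)) > 0) == s.bool()).float().mean()
print(f"adversary accuracy on sanitized data: {acc.item():.2f} (0.5 means s is hidden)")
```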
Fairwashing: the risk of rationalization
Hiromi Arai
Olivier Fortineau
Sébastien Gambs
Satoshi Hara
Alain Tapp
Black-box explanation is the problem of explaining how a machine learning model, whose internal logic is hidden from the auditor and generally complex, produces its outcomes. Current approaches for solving this problem include model explanation, outcome explanation, as well as model inspection. While these techniques can be beneficial by providing interpretability, they can be used in a negative manner to perform fairwashing, which we define as promoting the false perception that a machine learning model respects some ethical values. In particular, we demonstrate that it is possible to systematically rationalize decisions taken by an unfair black-box model using the model explanation as well as the outcome explanation approaches with a given fairness metric. Our solution, LaundryML, is based on a regularized rule list enumeration algorithm whose objective is to search for fair rule lists approximating an unfair black-box model. We empirically evaluate our rationalization technique on black-box models trained on real-world datasets and show that one can obtain rule lists with high fidelity to the black-box model while being considerably less unfair at the same time.
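To make the rationalization risk concrete, the sketch below searches a tiny family of single-threshold rules for one that imitates an unfair black box while exhibiting a small demographic parity gap, which is the essence of selecting a fair-looking but high-fidelity surrogate. The candidate rules, data, and scalarized objective are illustrative assumptions, not the LaundryML enumeration algorithm.

```python
# Fairwashing illustration: pick a surrogate rule that balances fidelity to an unfair
# black box against a fairness penalty, so the explanation looks fairer than the model.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n = 3000
group = rng.integers(0, 2, size=n)                    # sensitive attribute
X = rng.normal(size=(n, 3))
bb = ((X[:, 0] + 1.2 * group) > 0.8).astype(int)      # unfair black-box decisions

def dp_gap(pred, g):
    return abs(pred[g == 1].mean() - pred[g == 0].mean())

# Candidate surrogates: single-threshold rules "predict 1 if X[:, j] > t".
candidates = [(j, t) for j, t in product(range(3), np.linspace(-1, 1, 21))]

lam = 2.0                                             # weight of the fairness term
best = None
for j, t in candidates:
    pred = (X[:, j] > t).astype(int)
    fidelity = (pred == bb).mean()                    # agreement with the black box
    score = fidelity - lam * dp_gap(pred, group)      # regularized objective
    if best is None or score > best[0]:
        best = (score, j, t, fidelity, dp_gap(pred, group))

print(f"black-box DP gap: {dp_gap(bb, group):.2f}")
print(f"chosen rule: X[:, {best[1]}] > {best[2]:.2f}  fidelity={best[3]:.2f}  DP gap={best[4]:.2f}")
```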