
Ulrich Aïvodji

Associate Academic Member
Assistant Professor, École de technologie supérieure (ÉTS), Department of Software and Information Technology Engineering
Research Topics
Data Mining
Deep Learning
Optimization
Representation Learning

Biography

Ulrich Aïvodji is an assistant professor of computer science in the Software and Information Technology Engineering Department of the École de technologie supérieure (ÉTS) in Montréal. He also leads the Trustworthy Information Systems Lab (TISL).

Aïvodji’s research areas are computer security, data privacy, optimization and machine learning. His current research focuses on several aspects of trustworthy machine learning, such as fairness, privacy-preserving machine learning and explainability.

Before his current position, he was a postdoctoral researcher at Université du Québec à Montréal, where he worked with Sébastien Gambs on machine learning ethics and privacy.

He earned his PhD in computer science from Université Paul-Sabatier (Toulouse) under the supervision of Marie-José Huguet and Marc-Olivier Killijian. He was affiliated with two research groups at the Systems Analysis and Architecture Laboratory–CNRS, one on dependable computing, fault tolerance and operations research, and another on combinatorial optimization and constraints.

Current Students

PhD - École de technologie supérieure
Master's Research - École de technologie supérieure
Postdoctorate - École de technologie supérieure
Master's Research - École de technologie supérieure
Master's Research - École de technologie supérieure
Co-supervisor:
PhD - École de technologie supérieure
Co-supervisor:
Research Intern - École de technologie supérieure (ÉTS)
Research Intern - École de technologie supérieure (ÉTS)
Collaborating researcher - Université de Montréal
PhD - École de technologie supérieure

Publications

Local Data Debiasing for Fairness Based on Generative Adversarial Training
Ulrich Aïvodji
François Bidet
Sébastien Gambs
Rosin Claude Ngueveu
Alain Tapp
The widespread use of automated decision processes in many areas of our society raises serious ethical issues with respect to the fairness of the process and the possible resulting discrimination. To solve this issue, we propose a novel adversarial training approach called GANSan for learning a sanitizer whose objective is to prevent the possibility of any discrimination (i.e., direct and indirect) based on a sensitive attribute by removing the attribute itself as well as the existing correlations with the remaining attributes. Our method GANSan is partially inspired by the powerful framework of generative adversarial networks (in particular Cycle-GANs), which offers a flexible way to learn a distribution empirically or to translate between two different distributions. In contrast to prior work, one of the strengths of our approach is that the sanitization is performed in the same space as the original data by only modifying the other attributes as little as possible, thus preserving the interpretability of the sanitized data. Consequently, once the sanitizer is trained, it can be applied to new data locally by an individual on their profile before releasing it. Finally, experiments on real datasets demonstrate the effectiveness of the approach as well as the achievable trade-off between fairness and utility.
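For intuition, here is a minimal, hypothetical PyTorch sketch of the adversarial setup the abstract describes, not the paper's implementation: a sanitizer rewrites a profile in its original feature space while a discriminator tries to recover the sensitive attribute. All names (Sanitizer, Discriminator, alpha) and architectural choices are illustrative assumptions.

```python
# Hypothetical sketch of GANSan-style adversarial sanitization.
# Assumes x: float tensor (batch, n_features) without the sensitive
# attribute, and s: float tensor (batch, 1) holding that attribute.
import torch
import torch.nn as nn

class Sanitizer(nn.Module):
    """Maps a profile to a sanitized version in the same feature
    space, ideally staying close to the original input."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, n_features),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Tries to recover the sensitive attribute from sanitized data."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # logit for the sensitive attribute
        )
    def forward(self, x):
        return self.net(x)

def train_step(san, disc, opt_s, opt_d, x, s, alpha=1.0):
    """One adversarial step: the discriminator learns to predict s
    from sanitized data; the sanitizer learns to fool it while
    minimizing a reconstruction loss against the original x."""
    bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()

    # 1) Update the discriminator on (detached) sanitized data.
    x_san = san(x).detach()
    opt_d.zero_grad()
    d_loss = bce(disc(x_san), s)
    d_loss.backward()
    opt_d.step()

    # 2) Update the sanitizer: stay faithful to x while making the
    #    discriminator's job as hard as possible (minus sign).
    opt_s.zero_grad()
    x_san = san(x)
    s_loss = mse(x_san, x) - alpha * bce(disc(x_san), s)
    s_loss.backward()
    opt_s.step()
    return d_loss.item(), s_loss.item()
```

Keeping the sanitizer's output in the original feature space, as in this sketch, is what lets the sanitized profile remain interpretable and be released by the individual in place of the original, per the abstract.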
Fairwashing: the risk of rationalization
Ulrich Aïvodji
Hiromi Arai
Olivier Fortineau
Sébastien Gambs
Satoshi Hara
Alain Tapp
Black-box explanation is the problem of explaining how a machine learning model -- whose internal logic is hidden to the auditor and generally complex -- produces its outcomes. Current approaches for solving this problem include model explanation, outcome explanation as well as model inspection. While these techniques can be beneficial by providing interpretability, they can be used in a negative manner to perform fairwashing, which we define as promoting the false perception that a machine learning model respects some ethical values. In particular, we demonstrate that it is possible to systematically rationalize decisions taken by an unfair black-box model using the model explanation as well as the outcome explanation approaches with a given fairness metric. Our solution, LaundryML, is based on a regularized rule list enumeration algorithm whose objective is to search for fair rule lists approximating an unfair black-box model. We empirically evaluate our rationalization technique on black-box models trained on real-world datasets and show that one can obtain rule lists with high fidelity to the black-box model while being considerably less unfair at the same time.
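To make the rationalization objective concrete, the following is a hypothetical NumPy sketch, not the LaundryML implementation: it scores candidate surrogate models by fidelity to the black box, regularized by a demographic-parity gap, and returns the best-scoring "fairwashed" explanation. All helper names and the weight lam are assumptions for illustration.

```python
# Hypothetical sketch of a fairwashing objective: reward agreement
# with the unfair black box, penalize apparent unfairness of the
# surrogate. Arrays are 0/1 NumPy vectors of equal length.
import numpy as np

def fidelity(candidate_preds, blackbox_preds):
    """Fraction of points where the surrogate mimics the black box."""
    return np.mean(candidate_preds == blackbox_preds)

def demographic_parity_gap(preds, sensitive):
    """|P(yhat=1 | s=1) - P(yhat=1 | s=0)|; lower looks 'fairer'."""
    return abs(preds[sensitive == 1].mean() - preds[sensitive == 0].mean())

def fairwashing_score(candidate_preds, blackbox_preds, sensitive, lam=1.0):
    """Regularized objective: high fidelity to the black box,
    penalized by the surrogate's demographic-parity gap."""
    return (fidelity(candidate_preds, blackbox_preds)
            - lam * demographic_parity_gap(candidate_preds, sensitive))

def pick_rationalization(candidates, blackbox_preds, sensitive, lam=1.0):
    """Enumerate candidate surrogates (e.g., predictions of rule
    lists) and return the one that best rationalizes the black box."""
    return max(candidates,
               key=lambda p: fairwashing_score(p, blackbox_preds,
                                               sensitive, lam))
```

In the paper itself the candidates are not a fixed pool but come from a regularized rule-list enumeration; this sketch only illustrates the fidelity-versus-fairness trade-off that makes such rationalization possible.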