
Ulrich Aïvodji

Associate Academic Member
Assistant Professor, École de technologie supérieure (ÉTS), Department of Software and Information Technology Engineering

Biography

Ulrich Aïvodji is an assistant professor of computer science in the Software and Information Technology Engineering Department of the École de technologie supérieure (ÉTS) in Montréal. He also leads the Trustworthy Information Systems Lab (TISL).

Aïvodji’s research areas are computer security, data privacy, optimization and machine learning. His current research focuses on several aspects of trustworthy machine learning, such as fairness, privacy-preserving machine learning and explainability.

Before his current position, he was a postdoctoral researcher at Université du Québec à Montréal, where he worked with Sébastien Gambs on machine learning ethics and privacy.

He earned his PhD in computer science from Université Paul-Sabatier (Toulouse) under the supervision of Marie-José Huguet and Marc-Olivier Killijian. He was affiliated with two research groups at the Systems Analysis and Architecture Laboratory–CNRS, one on dependable computing, fault tolerance and operations research, and another on combinatorial optimization and constraints.


Publications

Fairness Under Demographic Scarce Regime
Patrik Joslin Kenfack
Most existing works on fairness assume the model has full access to demographic information. However, there are scenarios where demographic information is only partially available, because a record was not maintained throughout data collection or for privacy reasons. This setting is known as the demographic scarce regime. Prior research has shown that training an attribute classifier to replace the missing sensitive attributes (proxy) can still improve fairness. However, using proxy sensitive attributes worsens fairness-accuracy trade-offs compared to using the true sensitive attributes. To address this limitation, we propose a framework for building attribute classifiers that achieve better fairness-accuracy trade-offs. Our method introduces uncertainty awareness in the attribute classifier and enforces fairness on samples whose demographic information is inferred with the lowest uncertainty. We show empirically that enforcing fairness constraints on samples with uncertain sensitive attributes is detrimental to both fairness and accuracy. Our experiments on two datasets show that the proposed framework yields models with significantly better fairness-accuracy trade-offs than classic attribute classifiers. Surprisingly, our framework even outperforms models trained with constraints on the true sensitive attributes.
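The mechanism described in the abstract lends itself to a short sketch. The Python snippet below is not the paper's code: the synthetic data, the predictive-entropy uncertainty score, and the use of fairlearn's ExponentiatedGradient reduction are all illustrative assumptions standing in for the framework's own attribute classifier and constraint-based training. It trains an attribute classifier on the subset where the sensitive attribute is observed, scores the inferred attributes by uncertainty, and enforces a demographic-parity constraint only on the low-uncertainty samples.

```python
# Illustrative sketch of fairness under a demographic scarce regime.
# NOT the paper's implementation: synthetic data, the entropy-based
# uncertainty score, and fairlearn's ExponentiatedGradient are
# stand-ins for the framework described in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(0)

# Synthetic data: features X, task label y, sensitive attribute a_true,
# with a_true observed for only ~10% of records (demographic scarcity).
n, d = 2000, 10
X = rng.normal(size=(n, d))
y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)
a_true = (X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
observed = rng.random(n) < 0.10

# 1. Train an attribute classifier on records whose attribute is known.
attr_clf = RandomForestClassifier(n_estimators=200, random_state=0)
attr_clf.fit(X[observed], a_true[observed])

# 2. Infer proxy attributes everywhere, with predictive entropy as a
#    simple per-sample uncertainty measure.
proba = attr_clf.predict_proba(X)
a_proxy = proba.argmax(axis=1)
entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)

# 3. Enforce the fairness constraint only on the half of the samples
#    whose inferred attribute is least uncertain.
confident = entropy <= np.quantile(entropy, 0.5)
mitigator = ExponentiatedGradient(
    LogisticRegression(max_iter=1000),
    constraints=DemographicParity(),
)
mitigator.fit(X[confident], y[confident],
              sensitive_features=a_proxy[confident])

# Evaluate the demographic-parity gap on all samples, using the true
# attribute (available here only because the data is synthetic).
pred = mitigator.predict(X)
gap = abs(pred[a_true == 1].mean() - pred[a_true == 0].mean())
print(f"demographic-parity gap: {gap:.3f}")
```

Filtering by uncertainty before applying the constraint reflects the abstract's observation that enforcing fairness on samples with unreliable proxy attributes can hurt both fairness and accuracy; the 50% quantile threshold used here is an arbitrary choice for illustration.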