Publications

Multilingual Hallucination Gaps
Cléa Chataigner
Performative Prediction on Games and Mechanism Design
Mehrnaz Mofakhami
Fernando P. Santos
Planning and Learning in Risk-Aware Restless Multi-Arm Bandits
Yossiri Adulyasak
Privacy-Preserving Group Fairness in Cross-Device Federated Learning
Sikha Pentyala
Nicola Neophytou
Anderson Nascimento
Martine De Cock
Group fairness ensures that the outcomes of machine learning (ML) based decision-making systems are not biased towards a certain group of people defined by a sensitive attribute such as gender or ethnicity. Achieving group fairness in Federated Learning (FL) is challenging because mitigating bias inherently requires using the sensitive attribute values of all clients, while FL is aimed precisely at protecting privacy by not giving access to the clients' data. As we show in this paper, this conflict between fairness and privacy in FL can be resolved by combining FL with Secure Multiparty Computation (MPC) and Differential Privacy (DP). To this end, we propose a privacy-preserving approach to calculate group fairness notions in the cross-device FL setting. Then, we propose two bias mitigation pre-processing and post-processing techniques in cross-device FL under formal privacy guarantees, without requiring the clients to disclose their sensitive attribute values. Empirical evaluations on real-world datasets demonstrate the effectiveness of our solution to train fair and accurate ML models in federated cross-device setups with privacy guarantees to the users.
Q-learning for Quantile MDPs: A Decomposition, Performance, and Convergence Analysis
Jia Lin Hau
Mohammad Ghavamzadeh
Marek Petrik
Representation Learning via Non-Contrastive Mutual Information
Zhaohan Daniel Guo
Bernardo Avila Pires
Dale Schuurmans
Bo Dai
On the Identifiability of Causal Abstractions
Sékou-Oumar Kaba
Causal representation learning (CRL) enhances machine learning models' robustness and generalizability by learning structural causal models associated with data-generating processes. We focus on a family of CRL methods that uses contrastive data pairs in the observable space, generated before and after a random, unknown intervention, to identify the latent causal model. Brehmer et al. (2022) showed that this is indeed possible, given that all latent variables can be intervened on individually. However, this is a highly restrictive assumption in many systems. In this work, we instead assume interventions on arbitrary subsets of latent variables, which is more realistic. We introduce a theoretical framework that calculates the degree to which we can identify a causal model, given a set of possible interventions, up to an abstraction that describes the system at a higher level of granularity.
LLMs are Greedy Agents: Effects of RL Fine-tuning on Decision-Making Abilities
Thomas Schmied
Jorg Bornschein
Jordi Grau-Moya
Markus Wulfmeier
Neural Kinematic Bases for Fluids
Yibo Liu
Paul Kry
Kenny Erleben
Sune Darkner
Teseo Schneider
Refining sequence-to-expression modelling with chromatin accessibility
Gregory Fonseca
Cortical differences across psychiatric disorders and associated common and rare genetic variants
Kuldeep Kumar
Zhijie Liao
Jakub Kopal
Clara Moreau
Christopher R. K. Ching
Claudia Modenato
Will Snyder
Sayeh Kazem
Charles-Olivier Martin
Anne-Marie Bélanger
Valérie K. Fontaine
Khadije Jizi
Rune Boen
Zohra Saci
Leila Kushan
Ana I. Silva
Marianne B.M. van den Bree
David E.J. Linden
Michael J. Owen
Jeremy Hall
Sarah Lippé
Bogdan Draganski
Laura Almasy
Sophia I. Thomopoulos
Neda Jahanshad
Ida E. Sønderby
Ole A. Andreassen
David C. Glahn
Armin Raznahan
Carrie Bearden
Tomas Paus
Paul M. Thompson
Sébastien Jacquemont