Publications

A Novel Information-Theoretic Objective to Disentangle Representations for Fair Classification
Pierre Colombo
Nathan Noiry
Guillaume Staerman
Fundamental Limits of Membership Inference Attacks on Machine Learning Models
Eric Aubinais
Elisabeth Gassiat
Membership inference attacks (MIAs) can reveal whether a particular data point was part of the training dataset, potentially exposing sensitive information about individuals. This article provides theoretical guarantees by exploring the fundamental statistical limitations associated with MIAs on machine learning models. More precisely, we first derive the statistical quantity that governs the effectiveness and success of such attacks. We then deduce that in a very general regression setting with overfitting algorithms, attacks may have a high probability of success. Finally, we investigate several situations for which we provide bounds on this quantity of interest. Our results enable us to deduce the accuracy of potential attacks based on the number of samples and other structural parameters of learning models. In certain instances, these parameters can be directly estimated from the dataset.
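The abstract does not spell out the governing statistical quantity, so the sketch below is only a hedged illustration of the kind of attack such bounds apply to: a generic loss-threshold membership inference attack, where overfitting shows up as a gap between member and non-member losses. The function names, threshold, and toy loss distributions are assumptions for illustration, not the paper's construction.

```python
# Hedged sketch (not the paper's analysis): a generic loss-threshold
# membership inference attack baseline.
import numpy as np

def loss_threshold_mia(per_example_losses: np.ndarray, threshold: float) -> np.ndarray:
    """Predict membership (1 = training member) when the loss falls below a threshold."""
    return (per_example_losses < threshold).astype(int)

def attack_accuracy(losses_members: np.ndarray,
                    losses_nonmembers: np.ndarray,
                    threshold: float) -> float:
    """Balanced accuracy of the attack on examples with known membership labels."""
    tpr = np.mean(loss_threshold_mia(losses_members, threshold) == 1)
    tnr = np.mean(loss_threshold_mia(losses_nonmembers, threshold) == 0)
    return 0.5 * (tpr + tnr)

# Toy usage: overfitting appears as a gap between member and non-member losses,
# which is what lets such attacks succeed.
rng = np.random.default_rng(0)
member_losses = rng.exponential(scale=0.2, size=1000)     # low loss on training data
nonmember_losses = rng.exponential(scale=1.0, size=1000)  # higher loss on unseen data
print(f"attack balanced accuracy: {attack_accuracy(member_losses, nonmember_losses, 0.5):.2f}")
```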
Surprise-Adaptive Intrinsic Motivation for Unsupervised Reinforcement Learning
Adriana Hugessen
Roger Creus Castanyer
Audio Editing with Non-Rigid Text Prompts
Francesco Paissan
Zhepei Wang
Paris Smaragdis
In this paper, we explore audio editing with non-rigid text edits. We show that the proposed editing pipeline is able to create audio edits that remain faithful to the input audio. We explore text prompts that perform addition, style transfer, and in-painting. We quantitatively and qualitatively show that the edits obtain results that outperform Audio-LDM, a recently released text-prompted audio generation model. Qualitative inspection of the results shows that the edits produced by our approach remain more faithful to the input audio in terms of keeping the original onsets and offsets of the audio events.
Detection and evaluation of bias-inducing features in machine learning
Moses Openja
Gabriel Laberge
Graph embedding and transfer learning can help predict potential species interaction networks despite data limitations
Tanya Strydom
Salomé Bouskila
Francis Banville
Ceres Barros
Dominique Caron
Maxwell J. Farrell
Marie‐Josée Fortin
Benjamin Mercier
Rogini Runghen
Giulio V. Dalla Riva
Timothée Poisot
Metawebs (networks of potential interactions within a species pool) are a powerful abstraction to understand how large-scale species interaction networks are structured. Because metawebs are typically expressed at large spatial and taxonomic scales, assembling them is a tedious and costly process; predictive methods can help circumvent data limitations by providing a first approximation of metawebs. One way to improve our ability to predict metawebs is to maximize available information by using graph embeddings, as opposed to an exhaustive list of species interactions. Graph embedding is an emerging field in machine learning that holds great potential for ecological problems. Here, we outline how the challenges associated with inferring metawebs line up with the advantages of graph embeddings, followed by a discussion of how the choice of the species pool affects the reconstructed network, specifically the role of human-made (or arbitrarily assigned) boundaries and how these may influence ecological hypotheses.
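As a hedged illustration of the general approach (not the authors' pipeline), the sketch below embeds a binary metaweb adjacency matrix with a truncated SVD and scores unobserved species pairs from the low-rank reconstruction; the matrix size, rank, and flagging threshold are arbitrary choices for the example.

```python
# Hedged sketch (assumptions, not the paper's implementation): embed a binary
# metaweb adjacency matrix with a truncated SVD and use the low-rank
# reconstruction to score unobserved species pairs as potential interactions.
import numpy as np

def embed_and_score(adjacency: np.ndarray, rank: int = 8) -> np.ndarray:
    """Return a dense score matrix; higher scores suggest likelier interactions."""
    u, s, vt = np.linalg.svd(adjacency, full_matrices=False)
    left = u[:, :rank] * s[:rank]   # consumer (row) embeddings
    right = vt[:rank, :]            # resource (column) embeddings
    return left @ right             # low-rank reconstruction

# Toy usage: a sparse 50-species metaweb with a handful of known interactions.
rng = np.random.default_rng(1)
metaweb = (rng.random((50, 50)) < 0.05).astype(float)
scores = embed_and_score(metaweb, rank=5)
candidates = np.argwhere((metaweb == 0) & (scores > scores.mean() + 2 * scores.std()))
print(f"{len(candidates)} unobserved pairs flagged as potential interactions")
```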
Mean-field games among teams
Jayakumar Subramanian
Akshat Kumar
Open-Set Multivariate Time-Series Anomaly Detection
Thomas Lai
Thi Kieu Khanh Ho
A Persuasive Approach to Combating Misinformation
Safwan Hossain
Andjela Mladenovic
Yiling Chen
Bayesian Persuasion is proposed as a tool for social media platforms to combat the spread of misinformation. Since platforms can use machine learning to predict the popularity and misinformation features of to-be-shared posts, and users are largely motivated to share popular content, platforms can strategically signal this informational advantage to change user beliefs and persuade them not to share misinformation. We characterize the optimal signaling scheme with imperfect predictions as a linear program and give necessary and sufficient conditions on the classifier to ensure the optimal platform utility is non-decreasing and continuous. Next, this interaction is considered under a performative model, wherein platform intervention affects the user's future behaviour. The convergence and stability of optimal signaling under this performative process are fully characterized. Lastly, we experimentally validate that our approach significantly reduces misinformation in both the single-round and performative settings and discuss the broader scope of using information design to combat misinformation.
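The abstract's linear program is not reproduced here; the sketch below is a hedged toy version of a generic Bayesian persuasion LP with a binary state (misinformation or clean) and a binary recommendation (share or skip), solved with scipy. The prior, the user and platform utilities, and the obedience formulation are illustrative assumptions, not the paper's exact model.

```python
# Hedged sketch (not the paper's exact formulation): a generic Bayesian
# persuasion LP. By the revelation principle, the platform's signals are
# modelled as action recommendations; variables x[state, action] are joint
# probabilities, subject to prior consistency and obedience constraints.
import numpy as np
from scipy.optimize import linprog

prior = np.array([0.3, 0.7])                    # P(misinfo), P(clean)
u_user = {"share": np.array([-0.4, 1.0]),       # user payoff for sharing, per state
          "skip": np.array([0.0, 0.0])}         # user payoff for not sharing
u_platform = {"share": np.array([-1.0, 0.5]),
              "skip": np.array([1.0, 0.0])}

# Variable order: x = [x(M, share), x(C, share), x(M, skip), x(C, skip)]
c = -np.concatenate([u_platform["share"], u_platform["skip"]])  # linprog minimizes

# Prior consistency: probabilities for each state sum to its prior mass.
A_eq = np.array([[1, 0, 1, 0],
                 [0, 1, 0, 1]], dtype=float)
b_eq = prior

# Obedience: following each recommendation is a best response for the user.
A_ub = np.array([
    np.concatenate([-(u_user["share"] - u_user["skip"]), [0, 0]]),  # obey "share"
    np.concatenate([[0, 0], -(u_user["skip"] - u_user["share"])]),  # obey "skip"
])
b_ub = np.zeros(2)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
x = res.x.reshape(2, 2)        # rows: recommendation (share, skip); cols: state (M, C)
signal = x / prior             # P(recommendation | state)
print("P(recommend | state):\n", signal)
print("Platform's expected utility:", -res.fun)
```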
Studying the characteristics of AIOps projects on GitHub
Roozbeh Aghili
Heng Li
Interpreting and Controlling Vision Foundation Models via Text Explanations
Haozhe Chen
Junfeng Yang
Carl Vondrick
Transparent Anomaly Detection via Concept-based Explanations
Laya Rafiee Sevyeri
Ivaxi Sheth
Farhood Farahnak