
Pablo Piantanida

Associate Academic Member
Full Professor, Université Paris-Saclay
Director, International Laboratory on Learning Systems (ILLS), McGill University
Associate Professor, École de technologie supérieure (ÉTS), Department of Systems Engineering

Biography

I am a professor at CentraleSupélec (Université Paris-Saclay) with the French National Centre for Scientific Research (CNRS), and Director of the International Laboratory on Learning Systems (ILLS), which brings together McGill University, the École de technologie supérieure (ÉTS), Mila – Quebec AI Institute, France's Centre National de la Recherche Scientifique (CNRS), Université Paris-Saclay and CentraleSupélec.

My research revolves around the application of advanced statistical and information-theoretic techniques to machine learning. I am interested in developing rigorous techniques based on information measures and concepts for building safe and trustworthy AI systems and establishing confidence in their behavior and robustness, thereby securing their use in society. My primary areas of expertise include information theory, information geometry, learning theory, privacy and fairness, with applications to computer vision and natural language processing.

I obtained my undergraduate education at the University of Buenos Aires and pursued graduate studies in applied mathematics at Université Paris-Saclay in France. Throughout my career, I have also held visiting positions at INRIA, Université de Montréal and the École de technologie supérieure (ÉTS), among others.

My earlier research spanned information theory and related fields, including distributed compression, statistical decision, universal source coding, cooperation, feedback, index coding, key generation, security, and privacy, among others.

I teach courses on machine learning, information theory and deep learning, covering topics such as statistical learning theory, information measures, and the statistical principles of neural networks.

Current Students

Eric Aubinais
Visiting Researcher - Université Paris-Saclay
eric.aubinais@mila.quebec
Maxime Darrin
PhD - McGill University
maxime.darrin@mila.quebec
Zahra Dehghani Tafti
PhD
zahra.dehghani-tafti@mila.quebec
Philippe Formont
PhD
philippe.formont@mila.quebec

Publications

Is Meta-training Really Necessary for Molecular Few-Shot Learning?
Philippe Formont
Hugo Jeannin
Ismail Ben Ayed
Few-shot learning has recently attracted significant interest in drug discovery, with a fast-growing literature mostly involving convoluted meta-learning strategies. We revisit the more straightforward fine-tuning approach for molecular data, and propose a regularized quadratic-probe loss based on the Mahalanobis distance. We design a dedicated block-coordinate descent optimizer, which avoids the degenerate solutions of our loss. Interestingly, our simple fine-tuning approach achieves highly competitive performance in comparison to state-of-the-art methods, while being applicable to black-box settings and removing the need for specific episodic pre-training strategies. Furthermore, we introduce a new benchmark to assess the robustness of the competing methods to domain shifts. In this setting, our fine-tuning baseline obtains consistently better results than meta-learning methods.
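A minimal numpy sketch of the general idea, assuming pre-computed molecular embeddings (the function names and the ridge term are illustrative; the paper's regularized quadratic-probe loss and block-coordinate optimizer are more involved):

    import numpy as np

    def fit_mahalanobis_probe(support_x, support_y, ridge=1e-3):
        # Per-class means plus a shared, ridge-regularized precision matrix.
        classes = np.unique(support_y)
        means = {c: support_x[support_y == c].mean(axis=0) for c in classes}
        centered = np.vstack([support_x[support_y == c] - means[c] for c in classes])
        cov = centered.T @ centered / len(centered) + ridge * np.eye(support_x.shape[1])
        return classes, means, np.linalg.inv(cov)

    def predict(query_x, classes, means, precision):
        # Assign each query molecule to the class with the smallest Mahalanobis distance.
        dists = np.stack(
            [np.einsum("ni,ij,nj->n", query_x - means[c], precision, query_x - means[c])
             for c in classes], axis=1)
        return classes[np.argmin(dists, axis=1)]

In the few-shot regime this amounts to fitting only a quadratic decision head on top of frozen molecular features, which is what makes the approach usable in black-box settings.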
COSMIC: Mutual Information for Task-Agnostic Summarization Evaluation
Maxime Darrin
Philippe Formont
Jackie Chi Kit Cheung
Assessing the quality of summarizers poses significant challenges. In response, we propose a novel task-oriented evaluation approach that assesses summarizers based on their capacity to produce summaries that are useful for downstream tasks, while preserving task outcomes. We theoretically establish a direct relationship between the resulting error probability of these tasks and the mutual information between source texts and generated summaries. We introduce …
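One way to see the kind of relationship the abstract refers to (a hedged sketch under simplifying assumptions, not the paper's exact statement): if the downstream task answer T is a function of the source text X and the summary S is generated from X alone, then T → X → S forms a Markov chain, so I(T; S) ≤ I(X; S) by data processing. Combined with Fano's inequality, any predictor of T from the summary with error probability Pe satisfies

    H(T) − I(X; S) ≤ H(T | S) ≤ h(Pe) + Pe · log(|T| − 1),

where h is the binary entropy. In words, a summary that retains little mutual information with its source cannot support low error on source-dependent tasks.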
Optimal Zero-Shot Detector for Multi-Armed Attacks
Federica Granese
Marco Romanelli
This paper explores a scenario in which a malicious actor employs a multi-armed attack strategy to manipulate data samples, offering them various avenues to introduce noise into the dataset. Our central objective is to protect the data by detecting any alterations to the input. We approach this defensive strategy with utmost caution, operating in an environment where the defender possesses significantly less information compared to the attacker. Specifically, the defender is unable to utilize any data samples for training a defense model or verifying the integrity of the channel. Instead, the defender relies exclusively on a set of pre-existing detectors readily available "off the shelf". To tackle this challenge, we derive an innovative information-theoretic defense approach that optimally aggregates the decisions made by these detectors, eliminating the need for any training data. We further explore a practical use-case scenario for empirical evaluation, where the attacker possesses a pre-trained classifier and launches well-known adversarial attacks against it. Our experiments highlight the effectiveness of our proposed solution, even in scenarios that deviate from the optimal setup.
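As a point of reference, a training-free aggregation baseline can be sketched in a few lines (a naive worst-case rule, not the optimal information-theoretic aggregator derived in the paper; names are illustrative):

    import numpy as np

    def aggregate_detectors(scores, threshold=0.5):
        # scores: array of shape (K, N), each row a detector's probability that
        # the corresponding input was manipulated. With no training data, a simple
        # heuristic raises an alarm whenever the worst-case detector score is high.
        return scores.max(axis=0) >= threshold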
On the Stability of a non-hyperbolic nonlinear map with non-bounded set of non-isolated fixed points with applications to Machine Learning
Roberta Hansen
Matias Vera
Lautaro Estienne
Luciana Ferrer
Preserving Privacy in GANs Against Membership Inference Attack
Mohammadhadi Shateri
Francisco Messina
Fabrice Labeau
Generative Adversarial Networks (GANs) have been widely used for generating synthetic data for cases where there is a limited-size real-world data set or when data holders are unwilling to share their data samples. Recent works showed that GANs, due to overfitting and memorization, might leak information regarding their training data samples. This makes GANs vulnerable to Membership Inference Attacks (MIAs). Several defense strategies have been proposed in the literature to mitigate this privacy issue. Unfortunately, defense strategies based on differential privacy are proven to reduce extensively the quality of the synthetic data points. On the other hand, more recent frameworks such as PrivGAN and PAR-GAN are not suitable for small-size training data sets. In the present work, the overfitting in GANs is studied in terms of the discriminator, and a more general measure of overfitting based on the Bhattacharyya coefficient is defined. Then, inspired by Fano's inequality, our first defense mechanism against MIAs is proposed. This framework, which requires only a simple modification in the loss function of GANs, is referred to as the maximum entropy GAN or MEGAN and significantly improves the robustness of GANs to MIAs. As a second defense strategy, a more heuristic model based on minimizing the information leaked from the generated samples about the training data points is presented. This approach is referred to as mutual information minimization GAN (MIMGAN) and uses a variational representation of the mutual information to minimize the information that a synthetic sample might leak about the whole training data set. Applying the proposed frameworks to some commonly used data sets against state-of-the-art MIAs reveals that the proposed methods can reduce the accuracy of the adversaries to the level of random guessing accuracy with a small reduction in the quality of the synthetic data samples.
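The flavour of the first defense can be illustrated with a short PyTorch-style sketch (a rough approximation, not the exact MEGAN objective; the weighting and the choice of which samples are regularized are assumptions):

    import torch
    import torch.nn.functional as F

    def binary_entropy(p, eps=1e-7):
        p = p.clamp(eps, 1 - eps)
        return -(p * p.log() + (1 - p) * (1 - p).log())

    def discriminator_loss(d_real, d_fake, lam=0.1):
        # Standard GAN discriminator loss on real and generated batches ...
        adv = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) \
            + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
        # ... plus an entropy bonus on real (training) samples, discouraging the
        # discriminator from becoming over-confident on them, which is precisely
        # the memorization signal that membership inference attacks exploit.
        return adv - lam * binary_entropy(d_real).mean()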
Rainproof: An Umbrella To Shield Text Generators From Out-Of-Distribution Data
Maxime Darrin
Pierre Colombo
Implementing effective control mechanisms to ensure the proper functioning and security of deployed NLP models, from translation to chatbots, is essential. A key ingredient to ensure safe system behaviour is Out-Of-Distribution (OOD) detection, which aims to detect whether an input sample is statistically far from the training distribution. Although OOD detection is a widely covered topic in classification tasks, most methods rely on hidden features output by the encoder. In this work, we focus on leveraging soft-probabilities in a black-box framework, i.e., we can access the soft-predictions but not the internal states of the model. Our contributions include: (i) RAINPROOF, a Relative informAItioN Projection OOD detection framework; and (ii) a more operational evaluation setting for OOD detection. Surprisingly, we find that OOD detection is not necessarily aligned with task-specific measures. The OOD detector may filter out samples well processed by the model and keep samples that are not, leading to weaker performance. Our results show that RAINPROOF provides OOD detection methods more aligned with task-specific performance metrics than traditional OOD detectors.
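A minimal sketch of black-box OOD scoring from soft-probabilities alone, assuming a reference token distribution estimated on in-distribution data (the divergence and the aggregation below are deliberate simplifications of the relative information projection used by RAINPROOF):

    import numpy as np

    def ood_score(token_probs, reference_probs, eps=1e-12):
        # token_probs: (T, V) softmax outputs for the T generated tokens;
        # reference_probs: (V,) average in-distribution next-token distribution.
        p = np.clip(token_probs, eps, 1.0)
        q = np.clip(reference_probs, eps, 1.0)
        kl_per_token = (p * (np.log(p) - np.log(q))).sum(axis=1)
        return kl_per_token.mean()  # larger values suggest out-of-distribution input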
Toward Stronger Textual Attack Detectors
Pierre Colombo
Marine Picot
Nathan Noiry
Guillaume Staerman
The landscape of available textual adversarial attacks keeps growing, posing severe threats and raising concerns regarding the integrity of deep NLP systems. However, the crucial problem of defending against malicious attacks has drawn only limited attention from the NLP community, even though it is instrumental in developing robust and trustworthy systems. This paper makes two important contributions in this line of research: (i) we introduce LAROUSSE, a new framework to detect textual adversarial attacks, and (ii) we introduce STAKEOUT, a new benchmark composed of nine popular attack methods, three datasets, and two pre-trained models. LAROUSSE is ready-to-use in production as it is unsupervised, hyperparameter-free, and non-differentiable, protecting it against gradient-based methods. Our new benchmark STAKEOUT allows for a robust evaluation framework: we conduct extensive numerical experiments which demonstrate that LAROUSSE outperforms previous methods, and which allow us to identify interesting factors of detection rate variations.
A Novel Information-Theoretic Objective to Disentangle Representations for Fair Classification
Pierre Colombo
Nathan Noiry
Guillaume Staerman
Fundamental Limits of Membership Inference Attacks on Machine Learning Models
Eric Aubinais
Elisabeth Gassiat
Membership inference attacks (MIA) can reveal whether a particular data point was part of the training dataset, potentially exposing sensitive information about individuals. This article provides theoretical guarantees by exploring the fundamental statistical limitations associated with MIAs on machine learning models. More precisely, we first derive the statistical quantity that governs the effectiveness and success of such attacks. We then deduce that in a very general regression setting with overfitting algorithms, attacks may have a high probability of success. Finally, we investigate several situations for which we provide bounds on this quantity of interest. Our results enable us to deduce the accuracy of potential attacks based on the number of samples and other structural parameters of learning models. In certain instances, these parameters can be directly estimated from the dataset.
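To make the threat model concrete, here is the classic loss-thresholding attack (a standard illustration, not the paper's analysis): a sample is declared a training member when the target model's loss on it falls below a calibrated threshold, which works best against overfitted models.

    import numpy as np

    def calibrate_threshold(nonmember_losses, false_positive_rate=0.05):
        # Choose the threshold as a low quantile of losses on known non-members.
        return np.quantile(nonmember_losses, false_positive_rate)

    def loss_threshold_mia(losses, threshold):
        # Predict membership for samples whose loss is suspiciously small.
        return losses < threshold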
Transductive Learning for Textual Few-Shot Classification in API-based Embedding Models
Pierre Colombo
Victor Pellegrain
Malik Boudiaf
Victor Storchan
Myriam Tami
Ismail Ben Ayed
Céline Hudelot
Proprietary and closed APIs are becoming increasingly common to process natural language, and are impacting the practical applications of natural language processing, including few-shot classification. Few-shot classification involves training a model to perform a new classification task with a handful of labeled data. This paper presents three contributions. First, we introduce a scenario where the embedding of a pre-trained model is served through a gated API with compute-cost and data-privacy constraints. Second, we propose transductive inference, a learning paradigm that has been overlooked by the NLP community. Transductive inference, unlike traditional inductive learning, leverages the statistics of unlabeled data. We also introduce a new parameter-free transductive regularizer based on the Fisher-Rao loss, which can be used on top of the gated API embeddings. This method fully utilizes unlabeled data, does not share any label with the third-party API provider and could serve as a baseline for future research. Third, we propose an improved experimental setting and compile a benchmark of eight datasets involving multiclass classification in four different languages, with up to 151 classes. We evaluate our methods using eight backbone models, along with an episodic evaluation over 1,000 episodes, which demonstrates the superiority of transductive inference over the standard inductive setting.
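A minimal sketch of transductive inference on top of frozen, API-served embeddings, using soft prototype refinement as a stand-in for the paper's parameter-free Fisher-Rao regularizer (the number of steps and the temperature are illustrative):

    import numpy as np

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def transductive_predict(support_x, support_y, query_x, steps=10, temperature=10.0):
        classes = np.unique(support_y)
        # Initialize class prototypes from the handful of labeled support embeddings.
        protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
        onehot = (support_y[:, None] == classes[None, :]).astype(float)
        for _ in range(steps):
            # Soft-assign the unlabeled queries to the current prototypes ...
            q_probs = softmax(temperature * (query_x @ protos.T))
            # ... then refit the prototypes on labeled support plus softly-labeled
            # queries, which is how transduction exploits unlabeled statistics.
            weights = np.vstack([onehot, q_probs])
            feats = np.vstack([support_x, query_x])
            protos = (weights.T @ feats) / weights.sum(axis=0)[:, None]
        return classes[np.argmax(query_x @ protos.T, axis=1)]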
A Halfspace-Mass Depth-Based Method for Adversarial Attack Detection
Marine Picot
Federica Granese
Guillaume Staerman
Marco Romanelli
Francisco Messina
Pierre Colombo
On the (Im)Possibility of Estimating Various Notions of Differential Privacy (short paper)
Daniele Gorla
Louis Jalouzot
Federica Granese
C. Palamidessi
We analyze to what extent final users can infer information about the level of protection of their data when the data obfuscation mechanism is a priori unknown to them (the so-called "black-box" scenario). In particular, we delve into the investigation of two notions of local differential privacy (LDP), namely ε-LDP and Rényi LDP. On one hand, we prove that, without any assumption on the underlying distributions, it is not possible to have an algorithm able to infer the level of data protection with provable guarantees. On the other hand, we demonstrate that, under reasonable assumptions (namely, Lipschitzness of the involved densities on a closed interval), such guarantees exist and can be achieved by a simple histogram-based estimator.
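A toy version of the histogram idea, assuming a one-dimensional mechanism output on a known interval (the bin count and the handling of empty bins are simplifications; the paper's estimator comes with actual guarantees under the Lipschitz assumption):

    import numpy as np

    def estimate_ldp_epsilon(samples_x0, samples_x1, bins=50, support=(0.0, 1.0)):
        # Empirical output densities of the unknown mechanism for two inputs x0, x1.
        h0, _ = np.histogram(samples_x0, bins=bins, range=support, density=True)
        h1, _ = np.histogram(samples_x1, bins=bins, range=support, density=True)
        mask = (h0 > 0) & (h1 > 0)  # a real estimator must treat empty bins carefully
        # epsilon-LDP bounds the log-ratio of output densities over inputs and outputs.
        return np.abs(np.log(h0[mask]) - np.log(h1[mask])).max()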