Alexandre Drouin

Associate Industry Member
Adjunct Professor, Université Laval, Department of Electrical and Computer Engineering
Research Scientist, ServiceNow
Research Topics
LLM-based agents
Deep learning
Computational biology
Causality
Time series forecasting

Biography

Alexandre Drouin is an artificial intelligence researcher at ServiceNow Research in Montréal and an adjunct professor in the Department of Computer Science and Software Engineering at Université Laval. He leads a research team that explores the use of machine learning for decision-making in complex dynamic environments. His main research interest is causal decision-making, which aims to answer interventional and counterfactual questions while accounting for potential sources of uncertainty, such as ambiguity in the causal relationships underlying a system and the effect of latent variables. He is also interested in probabilistic forecasting models for time series and their use in predicting the long-term effects of actions.

He holds a PhD in computer science from Université Laval, awarded for his work on developing machine learning algorithms for biomarker discovery in genomics and their application to the problem of antibiotic resistance.

Current Students

PhD - UdeM
Principal supervisor:
PhD - Polytechnique
Co-supervisor:

Publications

Differentiable Causal Discovery from Interventional Data
Philippe Brouillard
Sébastien Lachapelle
Alexandre Lacoste
Discovering causal relationships in data is a challenging task that involves solving a combinatorial problem for which the solution is not always identifiable. A new line of work reformulates the combinatorial problem as a continuous constrained optimization one, enabling the use of different powerful optimization techniques. However, methods based on this idea do not yet make use of interventional data, which can significantly alleviate identifiability issues. In this work, we propose a neural network-based method for this task that can leverage interventional data. We illustrate the flexibility of the continuous-constrained framework by taking advantage of expressive neural architectures such as normalizing flows. We show that our approach compares favorably to the state of the art in a variety of settings, including perfect and imperfect interventions for which the targeted nodes may even be unknown.
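The core of the continuous reformulation is an acyclicity constraint that is differentiable in the edge weights. The sketch below is a minimal illustration rather than the paper's method: it uses the well-known trace-of-matrix-exponential constraint with a simple quadratic penalty, and the `data_fit` term is a placeholder for the model's negative log-likelihood.

```python
import torch

d = 5  # number of variables (illustrative)

def acyclicity(W: torch.Tensor) -> torch.Tensor:
    # h(W) = tr(exp(W * W)) - d is zero iff the weighted adjacency
    # matrix W encodes a DAG (*: element-wise product).
    return torch.trace(torch.linalg.matrix_exp(W * W)) - W.shape[0]

# In the paper the graph is parameterized through a neural network;
# a raw weight matrix keeps this sketch short.
W = torch.randn(d, d, requires_grad=True)
optimizer = torch.optim.Adam([W], lr=1e-2)
mu = 10.0  # penalty coefficient, grown across augmented-Lagrangian rounds

for _ in range(500):
    optimizer.zero_grad()
    data_fit = (W ** 2).mean()  # placeholder for the likelihood term
    loss = data_fit + mu * acyclicity(W) ** 2
    loss.backward()
    optimizer.step()
```

In the full method, the penalty coefficient is increased over successive rounds until the constraint is satisfied, at which point the learned weighted graph is thresholded into a DAG.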
Gradient-Based Neural DAG Learning with Interventions
Philippe Brouillard
Sébastien Lachapelle
Alexandre Lacoste
Decision making based on statistical association alone can be a dangerous endeavor due to non-causal associations. Ideally, one would rely on causal relationships that enable reasoning about the effect of interventions. Several methods have been proposed to discover such relationships from observational and interventional data. Among them, GraN-DAG, a method that relies on the constrained optimization of neural networks, was shown to produce state-of-the-art results among algorithms relying purely on observational data. However, it is limited to observational data and cannot make use of interventions. In this work, we extend GraN-DAG to support interventional data and show that this improves its ability to infer causal structures.
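A key ingredient when scoring a graph against interventional data is to exclude the conditionals of intervened-on nodes from the likelihood, since under a perfect intervention those mechanisms are replaced by the experimenter. A hedged sketch of such a masked score, with hypothetical tensor names:

```python
import torch

def interventional_score(log_conditionals: torch.Tensor,
                         mask: torch.Tensor) -> torch.Tensor:
    """log_conditionals[i, j] = log p(x_j | parents of x_j) for sample i
    under the current graph; mask[i, j] = 0 when node j was intervened on
    in sample i's regime, 1 otherwise. Intervened mechanisms do not
    contribute to the score."""
    return (log_conditionals * mask).sum(dim=1).mean()

# Hypothetical usage: a batch of 32 samples over 5 nodes, where node 2
# was intervened on in every sample.
log_p = torch.randn(32, 5)
mask = torch.ones(32, 5)
mask[:, 2] = 0.0
score = interventional_score(log_p, mask)
```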
In Search of Robust Measures of Generalization
Brady Neal
Nitarshan Rajkumar
Ethan Caballero
Linbo Wang
Daniel M. Roy
One of the principal scientific challenges in deep learning is explaining generalization, i.e., why the particular way the community now trains networks to achieve small training error also leads to small error on held-out data from the same population. It is widely appreciated that some worst-case theories -- such as those based on the VC dimension of the class of predictors induced by modern neural network architectures -- are unable to explain empirical performance. A large volume of work aims to close this gap, primarily by developing bounds on generalization error, optimization error, and excess risk. When evaluated empirically, however, most of these bounds are numerically vacuous. Focusing on generalization bounds, this work addresses the question of how to evaluate such bounds empirically. Jiang et al. (2020) recently described a large-scale empirical study aimed at uncovering potential causal relationships between bounds/measures and generalization. Building on their study, we highlight where their proposed methods can obscure failures and successes of generalization measures in explaining generalization. We argue that generalization measures should instead be evaluated within the framework of distributional robustness.
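Concretely, a robust evaluation reports a measure's worst-case agreement with the generalization gap over a family of environments (e.g., slices of hyperparameter space), rather than an average over all model pairs. A minimal sketch of this idea, with hypothetical inputs; it is not the paper's exact protocol:

```python
import numpy as np

def worst_case_sign_agreement(measure: np.ndarray,
                              gap: np.ndarray,
                              env: np.ndarray) -> float:
    """For each environment, count how often the measure orders pairs of
    trained models the same way as their generalization gap, then return
    the worst environment's score instead of an average."""
    scores = []
    for e in np.unique(env):
        m, g = measure[env == e], gap[env == e]
        agree, total = 0, 0
        for i in range(len(m)):
            for j in range(i + 1, len(m)):
                if g[i] != g[j]:
                    agree += int((m[i] - m[j]) * (g[i] - g[j]) > 0)
                    total += 1
        if total:
            scores.append(agree / total)
    return min(scores) if scores else float("nan")
```

A measure that tracks generalization only on average, but fails badly in some environment, scores poorly under this criterion, which is the failure mode the paper argues averaged evaluations can obscure.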
Synbols: Probing Learning Algorithms with Synthetic Datasets
Alexandre Lacoste
Pau Rodríguez
Frédéric Branchaud-Charron
Parmida Atighehchian
Massimo Caccia
Issam Hadj Laradji
Matt P. Craddock
David Vazquez