Laurent Charlin

Core Academic Member
Canada CIFAR AI Chair
Associate Professor, HEC Montréal, Department of Decision Sciences
Associate Professor, Université de Montréal, Department of Computer Science and Operations Research
Research Topics
Causality
Data Mining
Deep Learning
Generative Models
Graph Neural Networks
Information Retrieval
Natural Language Processing
Probabilistic Models
Recommender Systems
Reinforcement Learning
Representation Learning

Biography

Laurent Charlin is a Canada CIFAR AI Chair and core academic member of Mila – Quebec Artificial Intelligence Institute, as well as an associate professor at HEC Montréal, the business school affiliated with Université de Montréal.

Charlin’s research focuses on developing novel machine learning models to aid in decision-making. His recent work centres on learning from data that changes over time, and on applications in fields such as recommender systems and optimization.

He has a number of highly cited publications on dialogue systems (chatbots). He co-developed the Toronto Paper Matching System (TPMS), which has been widely used by computer science conferences for matching reviewers to papers. He has also taught MOOCs and given introductory talks and media interviews to support knowledge transfer and improve AI literacy.

Current Students

Postdoctorate - HEC Montréal (co-supervisor)
Master's Research - HEC Montréal
PhD - Université de Montréal
PhD - Université de Montréal (co-supervisor)
PhD - Université de Montréal
Postdoctorate
PhD - Université Laval (principal supervisor)
PhD - Université de Montréal (co-supervisor)
PhD - Université de Montréal (co-supervisor)
PhD - Concordia University (principal supervisor)
PhD - Université de Montréal
PhD - Université de Montréal
PhD - Université de Montréal

Publications

Pretraining Representations for Data-Efficient Reinforcement Learning
Max Schwarzer
Nitarshan Rajkumar
Michael Noukhovitch
Ankesh Anand
Philip Bachman
Data efficiency is a key challenge for deep reinforcement learning. We address this problem by using unlabeled data to pretrain an encoder which is then finetuned on a small amount of task-specific data. To encourage learning representations which capture diverse aspects of the underlying MDP, we employ a combination of latent dynamics modelling and unsupervised goal-conditioned RL. When limited to 100k steps of interaction on Atari games (equivalent to two hours of human experience), our approach significantly surpasses prior work combining offline representation pretraining with task-specific finetuning, and compares favourably with other pretraining methods that require orders of magnitude more data. Our approach shows particular promise when combined with larger models as well as more diverse, task-aligned observational data, approaching human-level performance and data-efficiency on Atari in our best setting.
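
As a rough illustration of the pretrain-then-finetune recipe described above, the sketch below pretrains an encoder with a single latent-dynamics objective. It is a minimal PyTorch sketch under assumed shapes and module names, not the authors' implementation, which combines several objectives (including unsupervised goal-conditioned RL):

    # Minimal sketch: pretrain an encoder by predicting next latents from
    # (latent, action) pairs; the encoder is later finetuned on task data.
    # All names, shapes, and the single loss are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Encoder(nn.Module):
        def __init__(self, obs_dim, latent_dim=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))

        def forward(self, obs):
            return self.net(obs)

    class Dynamics(nn.Module):
        """Predicts the next latent state from the current latent and action."""
        def __init__(self, latent_dim=64, act_dim=4):
            super().__init__()
            self.net = nn.Linear(latent_dim + act_dim, latent_dim)

        def forward(self, z, a):
            return self.net(torch.cat([z, a], dim=-1))

    def pretrain_step(encoder, dynamics, opt, obs, act, next_obs):
        """One unsupervised step on unlabeled transitions (no rewards needed)."""
        z_pred = dynamics(encoder(obs), act)
        z_next = encoder(next_obs).detach()          # stop-gradient target
        loss = 1 - F.cosine_similarity(z_pred, z_next, dim=-1).mean()
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()

    encoder, dynamics = Encoder(obs_dim=32), Dynamics()
    opt = torch.optim.Adam([*encoder.parameters(), *dynamics.parameters()], lr=1e-3)
    obs, act, next_obs = torch.randn(8, 32), torch.randn(8, 4), torch.randn(8, 32)
    pretrain_step(encoder, dynamics, opt, obs, act, next_obs)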
The Machine Learning for Combinatorial Optimization Competition (ML4CO): Results and Insights
Simon Bowly
Jonas Charfreitag
Didier Chételat
Antonia Chmiela
Justin Dumouchelle
Ambros Gleixner
Aleksandr Kazachkov
Elias Boutros Khalil
Paweł Lichocki
Miles Lubin
Chris J. Maddison
Christopher Morris
D. Papageorgiou
Augustin Parjadis
Sebastian Pokutta
Antoine Prouvost
Lara Scavuzzo
Giulia Zarpellon
Linxin Yang
Sha Lai
Akang Wang
Xiaodong Luo
Xiang Zhou
Haohan Huang
Shengcheng Shao
Yuanming Zhu
Dong Dong Zhang
Tao Manh Quan
Zixuan Cao
Yang Xu
Zhewei Huang
Shuchang Zhou
C. Binbin
He Minggui
Haoren Ren Hao
Zhang Zhiyu
An Zhiwu
Mao Kun
Combinatorial optimization is a well-established area in operations research and computer science. Until recently, its methods have focused on solving problem instances in isolation, ignoring that they often stem from related data distributions in practice. However, recent years have seen a surge of interest in using machine learning as a new approach for solving combinatorial problems, either directly as solvers or by enhancing exact solvers. In this context, the ML4CO competition aimed to improve state-of-the-art combinatorial optimization solvers by replacing key heuristic components. The competition featured three challenging tasks: finding the best feasible solution, producing the tightest optimality certificate, and giving an appropriate solver configuration. Three realistic datasets were considered: balanced item placement, workload apportionment, and maritime inventory routing. This last dataset was kept anonymous for the contestants.
Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles
Yao Lu
Yue Dong
Multi-document summarization is a challenging task for which few large-scale datasets exist. We propose Multi-XScience, a large-scale multi-document summarization dataset created from scientific articles. Multi-XScience introduces a challenging multi-document summarization task: writing the related-work section of a paper based on its abstract and the articles it references. Our work is inspired by extreme summarization, a dataset construction protocol that favours abstractive modeling approaches. Descriptive statistics and empirical results, using several state-of-the-art models trained on the Multi-XScience dataset, reveal that Multi-XScience is well suited for abstractive models.
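
For readers who want to try the task, the snippet below is a hypothetical loading sketch using the Hugging Face datasets library; the dataset identifier "multi_x_science_sum" and the field layout are assumptions to verify against the hub:

    # Hypothetical loading sketch; the dataset id is an assumption.
    from datasets import load_dataset

    ds = load_dataset("multi_x_science_sum", split="train")
    example = ds[0]
    # Each example pairs a paper's abstract and its references' abstracts
    # (the inputs) with the paper's related-work section (the target).
    print(example.keys())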
A Large-Scale, Open-Domain, Mixed-Interface Dialogue-Based ITS for STEM
Iulian V. Serban
Varun Gupta
Ekaterina Kochmar
Dung D. Vu
Robert Belfer
Inference for travel time on transportation networks
Mohamad Elmasri
Aurélie Labbe
Denis Larocque
Travel time is essential for making travel decisions in real-world transportation networks. Understanding its distribution can resolve many fundamental problems in transportation. Empirically, single-edge travel time is well studied, but how to aggregate such information over many edges to arrive at the distribution of travel time over a route remains daunting. A range of statistical tools has been developed for network analysis; tools to study the statistical behavior of processes on dynamical networks are still lacking. This paper develops a novel statistical perspective on a specific type of mixing ergodic process (travel time) that mimics the behavior of travel time on real-world networks. Under general conditions on the single-edge speed (resistance) distribution, we show that travel time, normalized by distance, follows a Gaussian distribution with universal mean and variance parameters. We propose efficient inference methods for these parameters and, consequently, asymptotic universal confidence and prediction intervals for travel time. We further develop path- (route-)specific parameters that enable tighter Gaussian-based prediction intervals. We illustrate our methods with a real-world case study using mobile GPS data, where we show that the route-specific and universal intervals both achieve the 95% theoretical coverage levels. Moreover, the route-specific prediction intervals result in tighter bounds that outperform competing models.
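
To make the headline result concrete: if the travel time T over a route of length d satisfies the distance-normalized Gaussian law stated in the abstract, with universal (route-invariant) parameters mu and sigma, a 95% prediction interval for T follows directly. This is an illustrative derivation from the stated result, not necessarily the paper's exact construction:

    \frac{T}{d} \sim \mathcal{N}(\mu, \sigma^2)
    \quad\Longrightarrow\quad
    \Pr\big( d(\mu - 1.96\,\sigma) \le T \le d(\mu + 1.96\,\sigma) \big) \approx 0.95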
Prediction intervals for travel time on transportation networks
Mohamad Elmasri
Aurélie Labbe
Denis Larocque
Estimating travel time is essential for making travel decisions in transportation networks. Empirically, single road-segment travel time is well studied, but how to aggregate such information over many edges to arrive at the distribution of travel time over a route is still theoretically challenging. Understanding the travel-time distribution can help resolve many fundamental problems in transportation, quantifying travel uncertainty, for example. We develop a novel statistical perspective on specific types of dynamical processes that mimic the behavior of travel time on real-world networks. We show that, under general conditions, travel time, normalized by distance, follows a Gaussian distribution with route-invariant (universal) location and scale parameters. We develop efficient inference methods for these parameters, with which we propose asymptotic universal confidence and prediction intervals for travel time. We further develop our theory to include road-segment-level information to construct route-specific location and scale parameter sequences that produce tighter route-specific Gaussian-based prediction intervals. We illustrate our methods with a real-world case study using precollected mobile GPS data, where we show that the route-specific and route-invariant intervals both achieve the 95% theoretical coverage levels, with the former resulting in tighter bounds that also outperform competing models.
Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning
Massimo Caccia
Pau Rodriguez
Oleksiy Ostapenko
Fabrice Normandin
Min Lin
Lucas Caccia
Issam Hadj Laradji
Alexandre Lacoste
David Vazquez
IG-RL: Inductive Graph Reinforcement Learning for Massive-Scale Traffic Signal Control
François-Xavier Devailly
Denis Larocque
Scaling adaptive traffic signal control involves dealing with combinatorial state and action spaces. Multi-agent reinforcement learning attempts to address this challenge by distributing control to specialized agents. However, specialization hinders generalization and transferability, and the computational graphs underlying the neural-network architectures dominant in the multi-agent setting do not offer the flexibility to handle an arbitrary number of entities, which changes both between road networks and over time as vehicles traverse the network. We introduce Inductive Graph Reinforcement Learning (IG-RL), based on graph-convolutional networks, which adapts to the structure of any road network to learn detailed representations of traffic signal controllers and their surroundings. Our decentralized approach enables learning of a transferable adaptive traffic-signal-control policy. After being trained on an arbitrary set of road networks, our model can generalize to new road networks and traffic distributions with no additional training and a constant number of parameters, enabling greater scalability than prior methods. Furthermore, our approach can exploit the granularity of available data by capturing the (dynamic) demand at both the lane level and the vehicle level. The proposed method is tested on road networks and traffic settings never experienced during training. We compare IG-RL to multi-agent reinforcement learning and domain-specific baselines. In both synthetic road networks and a larger experiment involving the control of the 3,971 traffic signals of Manhattan, we show that different instantiations of IG-RL outperform the baselines.
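
The inductive ingredient that enables this transfer is that a graph-convolution layer's parameters depend only on feature dimensions, never on the number of nodes. The sketch below shows that property with a generic normalized-adjacency layer in PyTorch; it illustrates the mechanism only, not the IG-RL architecture itself:

    import torch
    import torch.nn as nn

    class GCNLayer(nn.Module):
        """Weights depend only on feature sizes, so the same layer can
        process road graphs with any number of lanes, signals, or vehicles."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.lin = nn.Linear(in_dim, out_dim)

        def forward(self, x, adj):
            # x: (n, in_dim) node features; adj: (n, n) adjacency; n varies.
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
            return torch.relu(self.lin(adj @ x / deg))   # mean over neighbours

    layer = GCNLayer(in_dim=8, out_dim=16)
    for n in (5, 500):                                   # two very different graphs
        x, adj = torch.randn(n, 8), (torch.rand(n, n) > 0.8).float()
        print(layer(x, adj).shape)                       # same weights, any n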
Language GANs Falling Short
Massimo Caccia
Lucas Caccia
William Fedus
Generating high-quality text with sufficient diversity is essential for a wide range of Natural Language Generation (NLG) tasks. Maximum-Likelihood (MLE) models trained with teacher forcing have consistently been reported as weak baselines, with their poor performance attributed to exposure bias (Bengio et al., 2015; Ranzato et al., 2015): at inference time, the model is fed its own predictions instead of ground-truth tokens, which can lead to accumulating errors and poor samples. This line of reasoning has led to an outbreak of adversarial approaches for NLG, on the grounds that GANs do not suffer from exposure bias. In this work, we make several surprising observations that contradict common beliefs. First, we revisit the canonical evaluation framework for NLG and point out fundamental flaws with quality-only evaluation: we show that one can outperform such metrics using a simple, well-known temperature parameter to artificially reduce the entropy of the model's conditional distributions. Second, we leverage the control over the quality/diversity trade-off given by this parameter to evaluate models over the whole quality-diversity spectrum, and find that MLE models consistently outperform the proposed GAN variants across the whole quality-diversity space. Our results have several implications: 1) the impact of exposure bias on sample quality is less severe than previously thought; 2) temperature tuning provides a better quality/diversity trade-off than adversarial training, while being easier to train, easier to cross-validate, and less computationally expensive. Code to reproduce the experiments is available at github.com/pclucas14/GansFallingShort
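
The temperature parameter the abstract leans on is simply a rescaling of the model's output logits before sampling: lower temperatures sharpen the distribution (quality up, diversity down) and higher ones flatten it. A minimal sketch of the mechanism, independent of any particular model:

    import torch

    def sample_with_temperature(logits, temperature=1.0):
        """Rescale logits, renormalize, and sample one token index."""
        probs = torch.softmax(logits / temperature, dim=-1)
        return torch.multinomial(probs, num_samples=1)

    logits = torch.tensor([2.0, 1.0, 0.5, 0.1])          # toy next-token scores
    for t in (0.5, 1.0, 1.5):                            # sweep the trade-off
        print(t, sample_with_temperature(logits, t).item())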
Online Fast Adaptation and Knowledge Accumulation (OSAKA): a New Approach to Continual Learning.
Massimo Caccia
Pau Rodriguez
Oleksiy Ostapenko
Fabrice Normandin
Min Lin
Lucas Caccia
Issam Hadj Laradji
Alexandre Lacoste
David Vazquez
Synbols: Probing Learning Algorithms with Synthetic Datasets
Alexandre Lacoste
Pau Rodríguez
Frédéric Branchaud-Charron
Parmida Atighehchian
Massimo Caccia
Issam Hadj Laradji
Matt P. Craddock
David Vazquez
On the Effectiveness of Two-Step Learning for Latent-Variable Models
Latent-variable generative models offer a principled solution for modeling and sampling from complex probability distributions. Implementing a joint training objective with a complex prior, however, can be a tedious task, as one is typically required to derive and code a specific cost function for each new type of prior distribution. In this work, we propose a general framework for learning latent-variable generative models in a two-step fashion. In the first step of the framework, we train an autoencoder; in the second step, we fit a prior model on the resulting latent distribution. This two-step approach offers a convenient alternative to joint training, as it allows for a straightforward combination of existing models without the hassle of deriving new cost functions or coding joint training objectives. Through a set of experiments, we demonstrate that two-step learning yields performance similar to joint training and, in some cases, even more accurate modeling.
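
A minimal sketch of the two-step recipe under simple assumptions: a dense autoencoder for step one and a Gaussian-mixture prior for step two (the framework accommodates other autoencoders and priors):

    import torch
    import torch.nn as nn
    from sklearn.mixture import GaussianMixture

    # Step 1: train an autoencoder (a single reconstruction step shown).
    enc = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 8))
    dec = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 784))
    opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)
    x = torch.rand(256, 784)                      # stand-in training batch
    loss = ((dec(enc(x)) - x) ** 2).mean()        # reconstruction error
    opt.zero_grad(); loss.backward(); opt.step()

    # Step 2: fit a prior on the latent codes; no joint objective to derive.
    with torch.no_grad():
        z = enc(x).numpy()
    prior = GaussianMixture(n_components=10).fit(z)

    # Sampling: draw latents from the prior and decode to data space.
    z_new, _ = prior.sample(5)
    with torch.no_grad():
        samples = dec(torch.tensor(z_new, dtype=torch.float32))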