
Laurent Charlin

Core Academic Member
Canada CIFAR AI Chair
Associate Professor, HEC Montréal, Department of Decision Sciences
Associate Professor, Université de Montréal, Department of Computer Science and Operations Research
Research Topics
AI for Science
Data Mining
Deep Learning
Generative Models
Graph Neural Networks
Information Retrieval
Natural Language Processing
Probabilistic Models
Recommender Systems
Reinforcement Learning
Representation Learning

Biography

Laurent Charlin is the Interim Scientific Director of Mila – Quebec Artificial Intelligence Institute, a Canada CIFAR AI Chair, as well as an associate professor at HEC Montréal, the business school affiliated with Université de Montréal.

Charlin’s research focuses on developing novel machine learning models to aid in decision-making. Recent work has focused on learning from data that changes over time, and on applications in fields such as recommender systems and optimization.

He has a number of highly cited publications on dialogue systems (chatbots). He co-developed the Toronto Paper Matching System (TPMS), which has been widely used by computer science conferences to match reviewers to papers. He has also taught MOOCs and given introductory talks and media interviews to contribute to knowledge transfer and improve AI literacy.

Current Students

Master's Research - HEC Montréal
Postdoctorate - HEC Montréal (Co-supervisor)
Master's Research - HEC Montréal
PhD - Université de Montréal
PhD - Université de Montréal (Co-supervisor)
Master's Research - HEC Montréal
PhD - HEC Montréal (Principal supervisor)
PhD - Université Laval (Principal supervisor)
PhD - Université de Montréal (Co-supervisor)
PhD - Université de Montréal (Co-supervisor)
PhD - Concordia University (Principal supervisor)
Collaborating Alumni - Université de Montréal
PhD - Université de Montréal
Postdoctorate - HEC Montréal (Co-supervisor)
PhD - Université de Montréal
PhD - Université de Montréal

Publications

Sequoia: A Software Framework to Unify Continual Learning Research
Pau Rodriguez
Matthew D Riemer
J. Hurtado
Lucas Cecchi
Dominic Zhao
Ryan Lindeborg
Timothée Lesort
David Vazquez
Massimo Caccia
The field of Continual Learning (CL) seeks to develop algorithms that accumulate knowledge and skills over time through interaction with non-stationary environments. In practice, a plethora of evaluation procedures (settings) and algorithmic solutions (methods) exist, each with its own potentially disjoint set of assumptions. This variety makes measuring progress in CL difficult. We propose a taxonomy of settings, where each setting is described as a set of assumptions. A tree-shaped hierarchy emerges from this view, where more general settings become the parents of those with more restrictive assumptions. This makes it possible to use inheritance to share and reuse research, as developing a method for a given setting also makes it directly applicable to any of its children. We instantiate this idea as a publicly available software framework called Sequoia, which features a wide variety of settings from both the Continual Supervised Learning (CSL) and Continual Reinforcement Learning (CRL) domains. Sequoia also includes a growing suite of methods that are easy to extend and customize, in addition to more specialized methods from external libraries. We hope that this new paradigm and its first implementation can help unify and accelerate research in CL. You can help us grow the tree by visiting (this GitHub URL).
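As an illustration of the inheritance idea described in the abstract, here is a minimal, hypothetical sketch (not Sequoia's actual API; class and attribute names are invented) of how a tree of settings can share evaluation logic, so that a method written for a general setting also runs on its more restrictive children.

```python
# Minimal sketch (not Sequoia's actual API) of a tree of settings sharing
# evaluation logic through inheritance: a method evaluated on a general
# setting remains applicable to every more restrictive child setting.

class Setting:
    """Most general setting: a stream of tasks with unknown task identities."""
    task_labels_available = False

    def evaluate(self, method):
        # Children only tighten assumptions, so this evaluation loop is
        # reusable as-is by every descendant setting.
        return method.fit_and_test(self)

class ContinualSupervisedSetting(Setting):
    """Supervised continual learning: labelled data, still non-stationary."""

class TaskIncrementalSetting(ContinualSupervisedSetting):
    """Adds the assumption that task identities are given at train/test time."""
    task_labels_available = True

class DummyMethod:
    def fit_and_test(self, setting):
        # Placeholder "method": report which assumptions it was allowed to use.
        return {"used_task_labels": setting.task_labels_available}

for setting_cls in (Setting, ContinualSupervisedSetting, TaskIncrementalSetting):
    print(setting_cls.__name__, setting_cls().evaluate(DummyMethod()))
```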
Comparative Study of Learning Outcomes for Online Learning Platforms
Francois St-Hilaire
Nathan J. Burns
Robert Belfer
Muhammad Shayan
Ariella Smofsky
Dung D. Vu
Antoine Frau
Joseph Potochny
Farid Faraji
Vincent Pavero
Neroli Ko
Ansona Onyi Ching
Sabina Elkins
A. Stepanyan
Adela Matajova
Iulian V. Serban
Ekaterina Kochmar
Continual Learning via Local Module Composition
Pau Rodriguez
Massimo Caccia
Modularity is a compelling solution to continual learning (CL), the problem of modeling sequences of related tasks. Learning and then composing modules to solve different tasks provides an abstraction to address the principal challenges of CL, including catastrophic forgetting, backward and forward transfer across tasks, and sub-linear model growth. We introduce local module composition (LMC), an approach to modular CL where each module is provided a local structural component that estimates the module's relevance to the input. Dynamic module composition is performed layer-wise based on local relevance scores. We demonstrate that agnosticity to task identities (IDs) arises from (local) structural learning that is module-specific, as opposed to task- and/or model-specific as in previous works, making LMC applicable to more CL settings. LMC also tracks statistics about the input distribution and adds new modules when outlier samples are detected. In a first set of experiments, LMC performs favorably compared to existing methods on the recent Continual Transfer-learning Benchmark without requiring task identities. In another study, we show that the locality of structural learning allows LMC to interpolate to related but unseen tasks (OOD), as well as to compose modular networks trained independently on different task sequences into a third modular network without any fine-tuning. Finally, in search of limitations of LMC, we study it on more challenging sequences of 30 and 100 tasks, demonstrating that local module selection becomes much more challenging in the presence of a large number of candidate modules. In this setting, the best-performing LMC spawns far fewer modules than an oracle-based baseline but reaches a lower overall accuracy. The codebase is available under https://github.com/oleksost/LMC.
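The following is a minimal numpy sketch of the layer-wise composition idea described above: each candidate module carries a local scorer that estimates its relevance to the current input, and module outputs are mixed with softmax weights over those scores. The shapes, linear modules, and scoring rule are illustrative assumptions, not the paper's implementation (see the linked repository for that).

```python
# Illustrative sketch of layer-wise module composition driven by local
# relevance scores, in the spirit of LMC (hypothetical shapes and scoring).
import numpy as np

rng = np.random.default_rng(0)

def layer_forward(x, modules, scorers):
    """Combine candidate module outputs, weighted by each module's own local
    estimate of how relevant it is to the current input."""
    outputs = np.stack([m @ x for m in modules])         # (n_modules, d_out)
    scores = np.array([float(s @ x) for s in scorers])   # one scalar per module
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                             # softmax over modules
    return (weights[:, None] * outputs).sum(axis=0), weights

d_in, d_out, n_modules = 8, 4, 3
modules = [rng.normal(size=(d_out, d_in)) for _ in range(n_modules)]
scorers = [rng.normal(size=d_in) for _ in range(n_modules)]  # local structural components

x = rng.normal(size=d_in)
y, w = layer_forward(x, modules, scorers)
print("composition weights:", np.round(w, 3))
```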
Data-Efficient Reinforcement Learning
Nitarshan Rajkumar
Ankesh Anand
Philip Bachman
Data efficiency poses a major challenge for deep reinforcement learning. We approach this issue from the perspective of self-supervised representation learning, leveraging reward-free exploratory data to pretrain encoder networks. We employ a novel combination of latent dynamics modelling and goal-reaching objectives, which exploit the inherent structure of data in reinforcement learning. We demonstrate that our method scales well with network capacity and pretraining data. When evaluated on the Atari 100k data-efficiency benchmark, our approach significantly outperforms previous methods combining unsupervised pretraining with task-specific finetuning, and approaches human-level performance.
Pretraining Representations for Data-Efficient Reinforcement Learning
Max Schwarzer
Nitarshan Rajkumar
Ankesh Anand
Philip Bachman
Data efficiency is a key challenge for deep reinforcement learning. We address this problem by using unlabeled data to pretrain an encoder which is then finetuned on a small amount of task-specific data. To encourage learning representations which capture diverse aspects of the underlying MDP, we employ a combination of latent dynamics modelling and unsupervised goal-conditioned RL. When limited to 100k steps of interaction on Atari games (equivalent to two hours of human experience), our approach significantly surpasses prior work combining offline representation pretraining with task-specific finetuning, and compares favourably with other pretraining methods that require orders of magnitude more data. Our approach shows particular promise when combined with larger models as well as more diverse, task-aligned observational data -- approaching human-level performance and data-efficiency on Atari in our best setting.
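Below is a hedged sketch of the kind of latent dynamics objective described above: an encoder maps observations to latent vectors, and a latent forward model is trained to predict the next latent from the current latent and action using reward-free transitions. The networks, shapes, and loss are stand-ins for illustration, not the paper's code.

```python
# Sketch of a latent dynamics pretraining objective: encode observations,
# predict the next latent from (latent, action), and minimize the gap to the
# encoded next observation. All shapes and networks are illustrative.
import numpy as np

rng = np.random.default_rng(1)
obs_dim, latent_dim, action_dim = 16, 8, 4

W_enc = rng.normal(size=(latent_dim, obs_dim)) * 0.1                   # stand-in encoder
W_dyn = rng.normal(size=(latent_dim, latent_dim + action_dim)) * 0.1   # latent dynamics model

def encode(obs):
    return np.tanh(W_enc @ obs)

def predict_next_latent(z, a):
    return np.tanh(W_dyn @ np.concatenate([z, a]))

# One (obs, action, next_obs) transition from reward-free exploratory data.
obs = rng.normal(size=obs_dim)
action = rng.normal(size=action_dim)
next_obs = rng.normal(size=obs_dim)

z, z_next = encode(obs), encode(next_obs)
z_pred = predict_next_latent(z, action)
dynamics_loss = float(np.mean((z_pred - z_next) ** 2))  # minimized during pretraining
print("latent dynamics loss:", round(dynamics_loss, 4))
```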
The Machine Learning for Combinatorial Optimization Competition (ML4CO): Results and Insights
Simon Bowly
Jonas Charfreitag
Didier Chételat
Antonia Chmiela
Justin Dumouchelle
Ambros Gleixner
Aleksandr Kazachkov
Elias Boutros Khalil
Paweł Lichocki
Andrea Lodi
Miles Lubin
Chris J. Maddison
Christopher Morris
D. Papageorgiou
Augustin Parjadis
Sebastian Pokutta
Antoine Prouvost
Lara Scavuzzo
Giulia Zarpellon
Linxin Yang
Sha Lai
Akang Wang
Xiaodong Luo
Xiang Zhou
Haohan Huang
Sheng Cheng Shao
Yuanming Zhu
Dong Dong Zhang
Tao Manh Quan
Zixuan Cao
Yang Xu
Zhewei Huang
Shuchang Zhou
C. Binbin
He Minggui
Haoren Ren Hao
Zhang Zhiyu
An Zhiwu
Mao Kun
Combinatorial optimization is a well-established area in operations research and computer science. Until recently, its methods have focused on solving problem instances in isolation, ignoring that they often stem from related data distributions in practice. However, recent years have seen a surge of interest in using machine learning as a new approach to solving combinatorial problems, either directly as solvers or by enhancing exact solvers. In this context, the ML4CO competition aims at improving state-of-the-art combinatorial optimization solvers by replacing key heuristic components. The competition featured three challenging tasks: finding the best feasible solution, producing the tightest optimality certificate, and giving an appropriate solver configuration. Three realistic datasets were considered: balanced item placement, workload apportionment, and maritime inventory routing. This last dataset was kept anonymous for the contestants.
Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles
Yao Lu
Yue Dong
Multi-document summarization is a challenging task for which few large-scale datasets exist. We propose Multi-XScience, a large-scale multi-document summarization dataset created from scientific articles. Multi-XScience introduces a challenging multi-document summarization task: writing the related-work section of a paper based on its abstract and the articles it references. Our work is inspired by extreme summarization, a dataset construction protocol that favours abstractive modeling approaches. Descriptive statistics and empirical results, using several state-of-the-art models trained on the Multi-XScience dataset, reveal that Multi-XScience is well suited for abstractive models.
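To make the task concrete, here is a small sketch of how a Multi-XScience-style training example could be assembled: the source pairs the query paper's abstract with the abstracts of its cited papers, and the target is the related-work section. Field names and the example text are assumptions for illustration, not necessarily the dataset's exact schema.

```python
# Hypothetical Multi-XScience-style example: abstract + cited abstracts as the
# source, related-work section as the target. Schema and text are illustrative.
example = {
    "abstract": "We propose a new multi-document summarization dataset ...",
    "ref_abstracts": [
        {"cite_id": "@cite_1", "abstract": "Extreme summarization compresses ..."},
        {"cite_id": "@cite_2", "abstract": "Abstractive summarizers generate ..."},
    ],
    "related_work": "Prior work on summarization @cite_1 @cite_2 has focused on ...",
}

# Concatenate the query abstract with its cited abstracts to form the model input.
source = example["abstract"] + " " + " ".join(
    f'{r["cite_id"]}: {r["abstract"]}' for r in example["ref_abstracts"]
)
target = example["related_work"]
print(source[:80], "->", target[:40])
```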
A Large-Scale, Open-Domain, Mixed-Interface Dialogue-Based ITS for STEM
Iulian V. Serban
Varun Gupta
Ekaterina Kochmar
Dung D. Vu
Robert Belfer
Inference for travel time on transportation networks
Mohamad Elmasri
Aurélie Labbe
Denis Larocque
Travel time is essential for making travel decisions in real-world transportation networks. Understanding its distribution can resolve many fundamental problems in transportation. Empirically, single-edge travel time is well studied, but how to aggregate such information over many edges to arrive at the distribution of travel time over a route is still daunting. A range of statistical tools have been developed for network analysis, but tools to study the statistical behavior of processes on dynamic networks are still lacking. This paper develops a novel statistical perspective on a specific type of mixing ergodic process (travel time) that mimics the behavior of travel time on real-world networks. Under general conditions on the single-edge speed (resistance) distribution, we show that travel time, normalized by distance, follows a Gaussian distribution with universal mean and variance parameters. We propose efficient inference methods for such parameters and, consequently, asymptotic universal confidence and prediction intervals for travel time. We further develop path- (route-) specific parameters that enable tighter Gaussian-based prediction intervals. We illustrate our methods with a real-world case study using mobile GPS data, where we show that the route-specific and universal intervals both achieve the 95% theoretical coverage levels. Moreover, the route-specific prediction intervals result in tighter bounds that outperform competing models.
Prediction intervals for travel time on transportation networks
Mohamad Elmasri
Aurélie Labbe
Denis Larocque
Estimating travel time is essential for making travel decisions in transportation networks. Empirically, single road-segment travel time is well studied, but how to aggregate such information over many edges to arrive at the distribution of travel time over a route is still theoretically challenging. Understanding travel-time distributions can help resolve many fundamental problems in transportation, such as quantifying travel uncertainty. We develop a novel statistical perspective on specific types of dynamical processes that mimic the behavior of travel time on real-world networks. We show that, under general conditions, travel time normalized by distance follows a Gaussian distribution with route-invariant (universal) location and scale parameters. We develop efficient inference methods for such parameters, with which we propose asymptotic universal confidence and prediction intervals for travel time. We further develop our theory to include road-segment-level information and construct route-specific location and scale parameter sequences that produce tighter route-specific Gaussian-based prediction intervals. We illustrate our methods with a real-world case study using precollected mobile GPS data, where we show that the route-specific and route-invariant intervals both achieve the 95% theoretical coverage levels, with the former producing tighter bounds that also outperform competing models.
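As a concrete illustration of the interval construction described above, the sketch below assumes distance-normalized travel time is approximately Gaussian with an already-estimated location mu and scale sigma, and builds a 95% prediction interval for a route of a given length from standard Normal quantiles. The numbers are made up; estimating mu and sigma from GPS traces is where the papers' actual methodology lies.

```python
# Gaussian prediction interval for route travel time, assuming travel time per
# unit distance ~ Normal(mu, sigma^2). All numeric values are illustrative.
from statistics import NormalDist

mu = 1.8      # assumed mean travel time per km (minutes/km)
sigma = 0.4   # assumed std. dev. of distance-normalized travel time
d = 12.0      # route length in km

z = NormalDist().inv_cdf(0.975)                       # ~1.96 for a 95% interval
lower, upper = d * (mu - z * sigma), d * (mu + z * sigma)
print(f"95% prediction interval for travel time: [{lower:.1f}, {upper:.1f}] minutes")
```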
Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning
Massimo Caccia
Pau Rodriguez
Min Lin
Lucas Caccia
Issam Hadj Laradji
Alexandre Lacoste
David Vazquez
IG-RL: Inductive Graph Reinforcement Learning for Massive-Scale Traffic Signal Control
François-Xavier Devailly
Denis Larocque
Scaling adaptive traffic signal control involves dealing with combinatorial state and action spaces. Multi-agent reinforcement learning attempts to address this challenge by distributing control to specialized agents. However, specialization hinders generalization and transferability, and the computational graphs underlying neural-network architectures (which dominate in the multi-agent setting) do not offer the flexibility to handle an arbitrary number of entities, a number that changes both between road networks and over time as vehicles traverse the network. We introduce Inductive Graph Reinforcement Learning (IG-RL), based on graph-convolutional networks, which adapts to the structure of any road network to learn detailed representations of traffic-signal controllers and their surroundings. Our decentralized approach enables learning a transferable, adaptive traffic-signal-control policy. After being trained on an arbitrary set of road networks, our model can generalize to new road networks and traffic distributions with no additional training and a constant number of parameters, enabling greater scalability than prior methods. Furthermore, our approach can exploit the granularity of available data by capturing the (dynamic) demand at both the lane level and the vehicle level. The proposed method is tested on road networks and traffic settings never experienced during training. We compare IG-RL to multi-agent reinforcement learning and domain-specific baselines. On both synthetic road networks and a larger experiment involving the control of Manhattan's 3,971 traffic signals, we show that different instantiations of IG-RL outperform the baselines.
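The toy sketch below illustrates the inductive idea: a graph convolution whose weights are shared across nodes and therefore independent of the number of intersections, so the same parameters apply to road networks of any size. The feature sizes and update rule are illustrative assumptions, not IG-RL's actual architecture.

```python
# Toy graph convolution with node-count-independent parameters: the same shared
# weights apply to road networks of different sizes. Shapes are illustrative.
import numpy as np

rng = np.random.default_rng(2)
feat_dim, hidden_dim = 6, 8
W_self = rng.normal(size=(hidden_dim, feat_dim)) * 0.1    # shared across all nodes
W_neigh = rng.normal(size=(hidden_dim, feat_dim)) * 0.1   # shared across all nodes

def graph_conv(node_feats, adj):
    """One message-passing step: combine each node's features with the mean of
    its neighbours' features, using weights shared across the whole graph."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    neigh_mean = (adj @ node_feats) / deg
    return np.tanh(node_feats @ W_self.T + neigh_mean @ W_neigh.T)

# The same weights work for graphs of different sizes (different road networks).
for n_nodes in (5, 12):
    adj = (rng.random((n_nodes, n_nodes)) < 0.3).astype(float)
    np.fill_diagonal(adj, 0.0)
    feats = rng.normal(size=(n_nodes, feat_dim))
    print(n_nodes, "nodes ->", graph_conv(feats, adj).shape)
```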