Portrait of Laurent Charlin

Laurent Charlin

Core Academic Member
Canada CIFAR AI Chair
Associate Professor, HEC Montréal, Department of Decision Sciences
Adjunct Professor, Université de Montréal, Department of Computer Science and Operations Research
Interim Scientific Director, Leadership Team
Research Topics
Representation Learning
Reinforcement Learning
Deep Learning
Data Mining
AI for Science
Generative Models
Probabilistic Models
Information Retrieval
Graph Neural Networks
Recommender Systems
Natural Language Processing

Biography

Laurent Charlin is Interim Scientific Director at Mila – Quebec Artificial Intelligence Institute, a Canada CIFAR AI Chair holder, and an Associate Professor at HEC Montréal. He is also a Core Academic Member of Mila.

His research focuses on developing new machine learning models to support decision-making. His recent work addresses learning from data that evolves over time. He also works on applications in areas such as recommender systems and optimization.

He has authored highly cited publications on dialogue systems (chatbots). Laurent Charlin co-developed the Toronto Paper Matching System (TPMS), which has been widely used by computer science conferences to match reviewers with papers. He has also contributed to several recent MOOCs, and has given introductory talks and media interviews to support knowledge transfer and improve AI literacy.

Current Students

Master's Research - HEC
Postdoctorate - HEC
Co-supervisor:
Master's Research - HEC
PhD - UdeM
PhD - UdeM
Co-supervisor:
Master's Research - HEC
PhD - Université Laval
Principal supervisor:
PhD - UdeM
Co-supervisor:
PhD - Concordia
Principal supervisor:
Alumni collaborator - UdeM
Postdoctorate - HEC
Co-supervisor:
PhD - UdeM
PhD - UdeM

Publications

How to Train Your LLM Web Agent: A Statistical Diagnosis
Dheeraj Vattikonda
Santhoshi Ravichandran
Emiliano Penaloza
Hadi Nekoei
Megh Thakkar
Thibault Le Sellier de Chezelles
Nicolas Gontier
Miguel Muñoz-Mármol
Sahar Omidi Shayegan
Stefania Raimondo
Alexandre Piché
Alexandre Lacoste
Massimo Caccia
Large language model (LLM) agents for web interfaces have advanced rapidly, yet open-source systems still lag behind proprietary agents. Bridging this gap is key to enabling customizable, efficient, and privacy-preserving agents. Two challenges hinder progress: the reproducibility issues in RL and LLM agent training, where results often depend on sensitive factors like seeds and decoding parameters, and the focus of prior work on single-step tasks, overlooking the complexities of web-based, multi-step decision-making. We address these gaps by providing a statistically driven study of training LLM agents for web tasks. Our two-stage pipeline combines imitation learning from a Llama 3.3 70B teacher with on-policy fine-tuning via Group Relative Policy Optimization (GRPO) on a Llama 3.1 8B student. Through 240 configuration sweeps and rigorous bootstrapping, we chart the first compute allocation curve for open-source LLM web agents. Our findings show that dedicating one-third of compute to teacher traces and the rest to RL improves MiniWoB++ success by 6 points and closes 60% of the gap to GPT-4o on WorkArena, while cutting GPU costs by 45%. We introduce a principled hyperparameter sensitivity analysis, offering actionable guidelines for robust and cost-effective agent training.
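At the core of the RL stage is GRPO's critic-free, group-relative advantage. The sketch below is a minimal illustration only (the function name and toy rewards are ours, not from the paper); the compute split mirrors the one-third/two-thirds allocation reported above.

```python
import numpy as np

def grpo_advantages(group_rewards):
    """Group-relative advantages at the heart of GRPO: each rollout is scored
    against the mean and std of its own group, so no learned critic is needed."""
    r = np.asarray(group_rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

# Toy example: four rollouts of the same web task, rewarded 1.0 on success.
print(grpo_advantages([1.0, 0.0, 0.0, 1.0]))

# Compute split mirroring the allocation reported above (illustrative numbers).
total_compute = 1.0
sft_compute, rl_compute = total_compute / 3, 2 * total_compute / 3
```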
Addressing Concept Mislabeling in Concept Bottleneck Models Through Preference Optimization
Emiliano Penaloza
Tianyue H. Zhang
Mateo Espinosa Zarlenga
LitLLMs, LLMs for Literature Review: Are we there yet?
Shubham Agarwal
Gaurav Sahu
Abhay Puri
Issam Hadj Laradji
Krishnamurthy Dj Dvijotham
Jason Stanley
Preference Optimization for Concept Bottleneck Models
Emiliano Penaloza
Tianyue H. Zhang
Mateo Espinosa Zarlenga
Concept Bottleneck Models (CBMs) propose to enhance the trustworthiness of AI systems by constraining their decisions on a set of human-understandable concepts. However, CBMs typically assume that datasets contain accurate concept labels—an assumption often violated in practice, which we show can significantly degrade performance (by 25% in some cases). To address this, we introduce the Concept Preference Optimization (CPO) objective, a new loss function based on Direct Preference Optimization, which effectively mitigates the negative impact of concept mislabeling on CBM performance. We provide an analysis of some key properties of the CPO objective, showing that it directly optimizes for the concept's posterior distribution, and contrast it with Binary Cross Entropy (BCE), where we show CPO is inherently less sensitive to concept noise. We empirically confirm our analysis, finding that CPO consistently outperforms BCE on three real-world datasets, with and without added label noise.
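For intuition, the CPO objective is described as a Direct-Preference-Optimization-style loss over concept labelings. The snippet below is only a schematic of such a preference loss, assuming we are given log-likelihoods of a preferred (clean) and a dispreferred (noisy) concept vector; the paper's exact CPO formulation may differ.

```python
import torch
import torch.nn.functional as F

def dpo_style_concept_loss(logp_preferred, logp_dispreferred, beta=1.0):
    """Schematic DPO-style preference loss over concept labelings: push the model
    to assign higher log-likelihood to the preferred (clean) concept vector than
    to the dispreferred (noisy) one. Inputs are per-example log-likelihoods."""
    return -F.logsigmoid(beta * (logp_preferred - logp_dispreferred)).mean()

# Toy usage on a batch of three examples.
logp_clean = torch.tensor([-1.2, -0.7, -2.0])
logp_noisy = torch.tensor([-2.5, -1.9, -2.2])
print(dpo_style_concept_loss(logp_clean, logp_noisy))
```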
Integrating Present and Past in Unsupervised Continual Learning
Yipeng Zhang
Richard Zemel
Mengye Ren
We formulate a unifying framework for *unsupervised continual learning (UCL)*, which disentangles learning objectives that are specific to the present and the past data, encompassing *stability*, *plasticity*, and *cross-task consolidation*. The framework reveals that many existing UCL approaches overlook cross-task consolidation and try to balance plasticity and stability in a shared embedding space. This results in worse performance due to a lack of within-task data diversity and reduced effectiveness in learning the current task. Our method, *Osiris*, which explicitly optimizes all three objectives on separate embedding spaces, achieves state-of-the-art performance on all benchmarks, including two novel ones proposed in this paper featuring semantically structured task sequences. Finally, we show some preliminary evidence that continual models can benefit from such more realistic learning scenarios.
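One way to read "optimizing all three objectives on separate embedding spaces" is a shared encoder with one projection head per objective. The sketch below is our illustration of that design, not the Osiris implementation; the loss functions attached to each head are omitted.

```python
import torch
import torch.nn as nn

class ThreeHeadUCL(nn.Module):
    """Illustrative design: a shared encoder with one projection head per
    objective, so plasticity, stability, and cross-task consolidation each get
    their own embedding space (schematic only, not the Osiris code)."""
    def __init__(self, encoder, dim, proj_dim=128):
        super().__init__()
        self.encoder = encoder
        self.head_plasticity = nn.Linear(dim, proj_dim)     # current-task loss
        self.head_stability = nn.Linear(dim, proj_dim)      # loss vs. frozen past model / replay
        self.head_consolidation = nn.Linear(dim, proj_dim)  # loss mixing current and replayed data

    def forward(self, x):
        h = self.encoder(x)
        return (self.head_plasticity(h),
                self.head_stability(h),
                self.head_consolidation(h))

# Toy usage with a trivial encoder on CIFAR-sized inputs.
model = ThreeHeadUCL(nn.Flatten(), dim=3 * 32 * 32)
z_plas, z_stab, z_cons = model(torch.randn(4, 3, 32, 32))
```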
TEARS: Text Representations for Scrutable Recommendations
Emiliano Penaloza
Olivier Gouvert
Haolun Wu
Traditional recommender systems rely on high-dimensional (latent) embeddings for modeling user-item interactions, often resulting in opaque representations that lack interpretability. Moreover, these systems offer limited control to users over their recommendations. Inspired by recent work, we introduce TExtuAl Representations for Scrutable recommendations (TEARS) to address these challenges. Instead of representing a user's interests through latent embeddings, TEARS encodes them in natural text, providing transparency and allowing users to edit them. To encode such preferences, we use modern LLMs to generate high-quality user summaries which we find uniquely capture user preferences. Using these summaries, we take a hybrid approach where we use an optimal transport procedure to align the summaries' representations with the representation of a standard VAE for collaborative filtering. We find this approach can surpass the performance of three popular VAE models while providing user-controllable recommendations. We further analyze the controllability of TEARS through three simulated user tasks to evaluate the effectiveness of user edits on their summaries. Our code and all user summaries can be seen in an anonymized repository.
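The alignment step can be pictured as computing an entropic optimal transport coupling between the LLM-summary embeddings and the VAE user embeddings. The code below is a generic Sinkhorn sketch under that reading; the actual TEARS procedure, cost function, and regularization are not reproduced here.

```python
import numpy as np

def sinkhorn_plan(X, Y, reg=0.1, n_iters=200):
    """Entropic optimal transport (Sinkhorn) plan between two sets of embeddings,
    as a stand-in for the alignment step described above (illustrative only).
    X: text-summary embeddings, Y: VAE user embeddings, one row per user."""
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    K = np.exp(-C / reg)
    a = np.full(len(X), 1.0 / len(X))                   # uniform source marginal
    b = np.full(len(Y), 1.0 / len(Y))                   # uniform target marginal
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iters):                            # alternating scaling updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]                  # transport plan

plan = sinkhorn_plan(np.random.randn(5, 8), np.random.randn(5, 8))
print(plan.sum())  # ~1.0: a valid coupling of the two embedding sets
```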
LLMs for Literature Review: Are we there yet?
Shubham Agarwal
Gaurav Sahu
Abhay Puri
Issam Hadj Laradji
Krishnamurthy Dj Dvijotham
Jason Stanley
Literature reviews are an essential component of scientific research, but they remain time-intensive and challenging to write, especially due to the recent influx of research papers. This paper explores the zero-shot abilities of recent Large Language Models (LLMs) in assisting with the writing of literature reviews based on an abstract. We decompose the task into two components: 1. Retrieving related works given a query abstract, and 2. Writing a literature review based on the retrieved results. We analyze how effective LLMs are for both components. For retrieval, we introduce a novel two-step search strategy that first uses an LLM to extract meaningful keywords from the abstract of a paper and then retrieves potentially relevant papers by querying an external knowledge base. Additionally, we study a prompting-based re-ranking mechanism with attribution and show that re-ranking doubles the normalized recall compared to naive search methods, while providing insights into the LLM's decision-making process. In the generation phase, we propose a two-step approach that first outlines a plan for the review and then executes steps in the plan to generate the actual review. To evaluate different LLM-based literature review methods, we create test sets from arXiv papers using a protocol designed for rolling use with newly released LLMs to avoid test set contamination in zero-shot evaluations. We release this evaluation protocol to promote additional research and development in this regard. Our empirical results suggest that LLMs show promising potential for writing literature reviews when the task is decomposed into smaller components of retrieval and planning. Further, we demonstrate that our planning-based approach achieves higher-quality reviews by minimizing hallucinated references in the generated review by 18-26% compared to existing simpler LLM-based generation methods.
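The retrieval component boils down to a small pipeline: LLM keyword extraction, a query to an external paper index, then prompt-based re-ranking with attribution. The sketch below wires these steps together with hypothetical `call_llm` and `search_papers` callables; the prompts and APIs are placeholders, not the ones used in the paper.

```python
def retrieve_related_work(query_abstract, call_llm, search_papers, k=20):
    """Two-step retrieval as described above: (1) ask an LLM for search keywords,
    (2) query an external paper index, (3) ask the LLM to re-rank with attribution.
    `call_llm` and `search_papers` are hypothetical callables supplied by the user."""
    keywords = call_llm(
        f"Extract 5 short search keywords for this abstract:\n{query_abstract}")
    candidates = search_papers(keywords, limit=k)
    ranking = call_llm(
        "Re-rank these papers by relevance to the abstract and justify each choice:\n"
        f"Abstract: {query_abstract}\nCandidates: {candidates}")
    return ranking
```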
Towards Modular LLMs by Building and Reusing a Library of LoRAs
Oleksiy Ostapenko
Zhan Su
Edoardo Ponti
Matheus Pereira
Lucas Caccia
Learning to Design Data-structures: A Case Study of Nearest Neighbor Search
Omar Salemohamed
Vatsal Sharan
Shivam Garg
Gregory Valiant
We propose a general framework for automating data-structure design and apply it to the problem of nearest neighbor search. Our model adapts to the underlying data distribution and provides fine-grained control over query and space complexity, enabling the discovery of solutions tailored to problem-specific constraints. We are able to reverse-engineer learned algorithms in several settings. In 1D, the model discovers optimal distribution (in)dependent algorithms such as binary search and variants of interpolation search. In higher dimensions, the model learns solutions that resemble K-d trees in some regimes, while in others they have elements of locality-sensitive hashing.
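For reference, interpolation search is the classic 1D, distribution-aware baseline that the learned structures are reported to recover; a standard implementation is sketched below (assuming sorted numeric keys).

```python
def interpolation_search(sorted_xs, q):
    """Classic interpolation search: the kind of distribution-aware 1D strategy
    the learned data structures recover when keys are (near-)uniformly spread."""
    lo, hi = 0, len(sorted_xs) - 1
    while lo <= hi and sorted_xs[lo] <= q <= sorted_xs[hi]:
        if sorted_xs[hi] == sorted_xs[lo]:
            break
        # Guess the position by linear interpolation instead of always halving.
        mid = lo + int((q - sorted_xs[lo]) * (hi - lo) / (sorted_xs[hi] - sorted_xs[lo]))
        if sorted_xs[mid] == q:
            return mid
        if sorted_xs[mid] < q:
            lo = mid + 1
        else:
            hi = mid - 1
    return lo if lo < len(sorted_xs) and sorted_xs[lo] == q else -1

print(interpolation_search(list(range(0, 100, 2)), 42))  # -> 21
```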