
Hugo Larochelle

Core Industry Member
Adjunct Professor, Université de Montréal, Department of Computer Science and Operations Research
Research Scientist
Scientific Director, Leadership Team
Research Topics
Deep Learning

Biography

Hugo Larochelle is a pioneering deep learning researcher, industry leader, and philanthropist.

He began his academic journey under two of the "founding fathers" of artificial intelligence: Yoshua Bengio, his PhD advisor at Université de Montréal, and Geoffrey Hinton, his postdoctoral supervisor at the University of Toronto.

Over the years, his research has led to several major breakthroughs found in modern AI systems. His work on denoising autoencoders identified the reconstruction of raw data from corrupted versions as a key paradigm for learning useful abstract representations from large amounts of unlabeled data. With models such as the neural autoregressive distribution estimator (NADE) and the masked autoencoder distribution estimator (MADE), he helped popularize autoregressive modeling with neural networks, a paradigm now ubiquitous in generative AI. His work "Zero-Data Learning of New Tasks" was the first to introduce the now-common concept of zero-shot learning.
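To make the denoising objective concrete, here is a minimal sketch of a denoising autoencoder in the spirit of that work (the layer sizes, corruption rate, and loss below are illustrative assumptions, not those of the original models):

```python
# Minimal denoising autoencoder sketch (illustrative only; sizes and
# corruption rate are arbitrary assumptions, not those of the original work).
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, dim=784, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x, corruption=0.3):
        # Corrupt the input by zeroing a random subset of entries,
        # then try to reconstruct the original, uncorrupted input.
        mask = (torch.rand_like(x) > corruption).float()
        return self.decoder(self.encoder(x * mask))

x = torch.rand(32, 784)                     # a batch of raw inputs
model = DenoisingAutoencoder()
loss = nn.functional.mse_loss(model(x), x)  # reconstruct x from its corrupted version
loss.backward()
```

The key property of this objective is that the network can only fill in the corrupted entries by learning structure shared across the data, which is what makes the learned representations useful downstream.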

He then brought his academic expertise to industry by co-founding the startup Whetlab, which was acquired by Twitter in 2015. After working at Twitter Cortex, he was recruited to lead Google's AI research lab in Montréal (Google Brain), now part of Google DeepMind. He is an adjunct professor at Université de Montréal, where he mentors the next generation of AI researchers. He has also developed a series of free online courses on machine learning.

A father of four, Hugo Larochelle, together with his spouse, Angèle St-Pierre, has made multiple donations to Université de Montréal, Université de Sherbrooke (where he was a professor), and Université Laval to support students and advance research, particularly in AI for the environment. He also initiated the TechAide conference, which mobilizes Montréal's technology community to raise funds for Centraide, supporting the charity's mission of fighting poverty and social exclusion.

Current Students

PhD - UdeM
Principal supervisor:
PhD - UdeM
Principal supervisor:
PhD - UdeM
Co-supervisor:

Publications

A Universal Representation Transformer Layer for Few-Shot Image Classification
Lu Liu
William L. Hamilton
Guodong Long
Jing Jiang
Few-shot classification aims to recognize unseen classes when presented with only a small number of samples. We consider the problem of multi-domain few-shot image classification, where unseen classes and examples come from diverse data sources. This problem has seen growing interest and has inspired the development of benchmarks such as Meta-Dataset. A key challenge in this multi-domain setting is to effectively integrate the feature representations from the diverse set of training domains. Here, we propose a Universal Representation Transformer (URT) layer that meta-learns to leverage universal features for few-shot classification by dynamically re-weighting and composing the most appropriate domain-specific representations. In experiments, we show that URT sets a new state-of-the-art result on Meta-Dataset. Specifically, it achieves top performance on the highest number of data sources compared to competing methods. We analyze variants of URT and present a visualization of the attention score heatmaps that sheds light on how the model performs cross-domain generalization.
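The re-weighting mechanism can be sketched as a small attention computation over per-domain feature vectors (the dimensions and dot-product scoring below are assumptions for illustration, not the paper's exact layer):

```python
# Toy sketch of attention-based re-weighting over domain-specific features.
# Dimensions and the scaled dot-product scoring are illustrative assumptions.
import torch
import torch.nn.functional as F

def compose_universal_representation(query, domain_feats):
    """query: (d,) task embedding; domain_feats: (n_domains, d)."""
    scores = domain_feats @ query / query.shape[0] ** 0.5  # (n_domains,)
    weights = F.softmax(scores, dim=0)                     # attention over domains
    return weights @ domain_feats                          # weighted composition

query = torch.randn(64)            # embedding of the current few-shot task
domain_feats = torch.randn(8, 64)  # features from 8 training domains
universal = compose_universal_representation(query, domain_feats)
```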
Revisiting Fundamentals of Experience Replay
William Fedus
Prajit Ramachandran
Mark Rowland
Will Dabney
Experience replay is central to off-policy algorithms in deep reinforcement learning (RL), but there remain significant gaps in our understanding. We therefore present a systematic and extensive analysis of experience replay in Q-learning methods, focusing on two fundamental properties: the replay capacity and the ratio of learning updates to experience collected (replay ratio). Our additive and ablative studies upend conventional wisdom around experience replay -- greater capacity is found to substantially increase the performance of certain algorithms, while leaving others unaffected. Counterintuitively, we show that theoretically ungrounded, uncorrected n-step returns are uniquely beneficial while other techniques confer limited benefit for sifting through larger memory. Separately, by directly controlling the replay ratio we contextualize previous observations in the literature and empirically measure its importance across a variety of deep RL algorithms. Finally, we conclude by testing a set of hypotheses on the nature of these performance benefits.
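The two quantities under study can be sketched as follows (the buffer size and ratio are arbitrary values, and `env_step` and `update` are hypothetical stand-ins, not the paper's setup):

```python
# Sketch of the two properties studied: replay capacity (buffer size) and
# replay ratio (learning updates per transition collected). Values are arbitrary.
import random
from collections import deque

REPLAY_CAPACITY = 100_000  # transitions older than this are evicted
REPLAY_RATIO = 0.25        # learning updates per environment step

buffer = deque(maxlen=REPLAY_CAPACITY)

def collect_and_train(env_step, update, num_steps=1_000):
    updates_owed = 0.0
    for _ in range(num_steps):
        buffer.append(env_step())       # collect one transition
        updates_owed += REPLAY_RATIO
        while updates_owed >= 1.0:      # keep the update/collect ratio fixed
            batch = random.sample(buffer, min(32, len(buffer)))
            update(batch)
            updates_owed -= 1.0

collect_and_train(env_step=lambda: object(), update=lambda batch: None)
```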
An Effective Anti-Aliasing Approach for Residual Networks
Cristina Vasconcelos
Vincent Dumoulin
Image pre-processing in the frequency domain has traditionally played a vital role in computer vision and was even part of the standard pipeline in the early days of deep learning. However, with the advent of large datasets, many practitioners concluded that this was unnecessary due to the belief that these priors can be learned from the data itself. Frequency aliasing is a phenomenon that may occur when sub-sampling any signal, such as an image or feature map, causing distortion in the sub-sampled output. We show that we can mitigate this effect by placing non-trainable blur filters and using smooth activation functions at key locations, particularly where networks lack the capacity to learn them. These simple architectural changes lead to substantial improvements in out-of-distribution generalization on both image classification under natural corruptions on ImageNet-C [10] and few-shot learning on Meta-Dataset [17], without introducing additional trainable parameters and using the default hyper-parameters of open source codebases.
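A rough sketch of such a non-trainable blur filter, applied before sub-sampling (the 3x3 binomial kernel is a common anti-aliasing choice and an assumption here, not necessarily the paper's exact filter):

```python
# Sketch of a non-trainable blur filter applied before striding ("blur pool").
import torch
import torch.nn.functional as F

def blur_pool(x, stride=2):
    """x: (N, C, H, W). Low-pass filter, then sub-sample."""
    k = torch.tensor([1.0, 2.0, 1.0])
    kernel = k[:, None] * k[None, :] / 16.0                  # 3x3 binomial blur
    kernel = kernel[None, None].repeat(x.shape[1], 1, 1, 1)  # one filter per channel
    x = F.conv2d(x, kernel, padding=1, groups=x.shape[1])    # blur; no learned weights
    return x[:, :, ::stride, ::stride]                       # then sub-sample

x = torch.randn(1, 8, 32, 32)
y = blur_pool(x)  # (1, 8, 16, 16), with reduced frequency aliasing
```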
On Catastrophic Interference in Atari 2600 Games
William Fedus
Dibya Ghosh
John D. Martin
Model-free deep reinforcement learning is sample inefficient. One hypothesis -- speculated, but not confirmed -- is that catastrophic interference within an environment inhibits learning. We test this hypothesis through a large-scale empirical study in the Arcade Learning Environment (ALE) and, indeed, find supporting evidence. We show that interference causes performance to plateau; the network cannot train on segments beyond the plateau without degrading the policy used to reach there. By synthetically controlling for interference, we demonstrate performance boosts across architectures, learning algorithms and environments. A more refined analysis shows that learning one segment of a game often increases prediction errors elsewhere. Our study provides a clear empirical link between catastrophic interference and sample efficiency in reinforcement learning.
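One way to picture the measurement is to check whether fitting one game segment raises prediction error on another (a hedged sketch; `model`, `loss_fn`, and the segments are hypothetical stand-ins, not the paper's experimental setup):

```python
# Sketch of an interference probe: does training on segment A
# increase prediction error on held-out segment B?
import torch

def interference(model, loss_fn, segment_a, segment_b, steps=100, lr=1e-3):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    with torch.no_grad():
        before = loss_fn(model, segment_b).item()   # error elsewhere, before
    for _ in range(steps):                          # train only on segment A
        opt.zero_grad()
        loss_fn(model, segment_a).backward()
        opt.step()
    with torch.no_grad():
        after = loss_fn(model, segment_b).item()    # error elsewhere, after
    return after - before   # positive => learning A interfered with B
```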
Language GANs Falling Short
Massimo Caccia
Lucas Caccia
William Fedus
Generating high-quality text with sufficient diversity is essential for a wide range of Natural Language Generation (NLG) tasks. Maximum-Likelihood (MLE) models trained with teacher forcing have consistently been reported as weak baselines, where poor performance is attributed to exposure bias (Bengio et al., 2015; Ranzato et al., 2015); at inference time, the model is fed its own prediction instead of a ground-truth token, which can lead to accumulating errors and poor samples. This line of reasoning has led to an outbreak of adversarial based approaches for NLG, on the account that GANs do not suffer from exposure bias. In this work, we make several surprising observations which contradict common beliefs. First, we revisit the canonical evaluation framework for NLG, and point out fundamental flaws with quality-only evaluation: we show that one can outperform such metrics using a simple, well-known temperature parameter to artificially reduce the entropy of the model's conditional distributions. Second, we leverage the control over the quality / diversity trade-off given by this parameter to evaluate models over the whole quality-diversity spectrum and find MLE models constantly outperform the proposed GAN variants over the whole quality-diversity space. Our results have several implications: 1) The impact of exposure bias on sample quality is less severe than previously thought, 2) temperature tuning provides a better quality / diversity trade-off than adversarial training while being easier to train, easier to cross-validate, and less computationally expensive. Code to reproduce the experiments is available at github.com/pclucas14/GansFallingShort
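The temperature mechanism the paper leverages is simple to sketch (the values below are illustrative):

```python
# Sketch of temperature-controlled sampling: dividing logits by a temperature
# below 1 lowers the entropy of the model's conditional distribution,
# trading diversity for sample quality.
import torch

def sample_with_temperature(logits, temperature=0.7):
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1)

logits = torch.randn(1, 50_000)          # next-token logits from an MLE model
token = sample_with_temperature(logits)  # temperature < 1: sharper, "safer" samples
```

Sweeping this single parameter traces out the quality-diversity curve along which the paper compares MLE models and GAN variants.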
Learning Graph Structure With A Finite-State Automaton Layer
Daniel D. Johnson
Daniel Tarlow
Small-GAN: Speeding Up GAN Training Using Core-sets
Samarth Sinha
Han Zhang
Anirudh Goyal
Augustus Odena
Recent work by Brock et al. (2018) suggests that Generative Adversarial Networks (GANs) benefit disproportionately from large mini-batch sizes. Unfortunately, using large batches is slow and expensive on conventional hardware. Thus, it would be nice if we could generate batches that were effectively large though actually small. In this work, we propose a method to do this, inspired by the use of Coreset-selection in active learning. When training a GAN, we draw a large batch of samples from the prior and then compress that batch using Coreset-selection. To create effectively large batches of 'real' images, we create a cached dataset of Inception activations of each training image, randomly project them down to a smaller dimension, and then use Coreset-selection on those projected activations at training time. We conduct experiments showing that this technique substantially reduces training time and memory usage for modern GAN variants, that it reduces the fraction of dropped modes in a synthetic dataset, and that it allows GANs to reach a new state of the art in anomaly detection.
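A sketch of the Coreset-selection step, here via greedy k-center on randomly projected activations (the greedy rule and dimensions are assumptions standing in for the paper's exact procedure):

```python
# Sketch of Coreset-selection: pick a small batch that "covers" a large one,
# using greedy k-center on randomly projected activations.
import torch

def coreset_indices(feats, k):
    """Greedy k-center: choose k points that cover `feats` (n, d)."""
    chosen = [0]
    dists = torch.cdist(feats, feats[chosen]).squeeze(1)
    for _ in range(k - 1):
        nxt = int(dists.argmax())  # farthest point from the current set
        chosen.append(nxt)
        dists = torch.minimum(dists, torch.cdist(feats, feats[[nxt]]).squeeze(1))
    return chosen

acts = torch.randn(1024, 2048)                    # cached Inception activations
proj = acts @ torch.randn(2048, 32) / 32 ** 0.5   # random low-dim projection
small_batch = acts[coreset_indices(proj, k=64)]   # effectively-large batch of 64
```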
Your GAN is Secretly an Energy-based Model and You Should use Discriminator Driven Latent Sampling
Tong Che
Ruixiang Zhang
Jascha Sohl-Dickstein
Yuan Cao
We show that the sum of the implicit generator log-density …
InfoBot: Structured Exploration in Reinforcement Learning Using Information Bottleneck
Anirudh Goyal
Riashat Islam
D. Strouse
Zafarali Ahmed
Matthew Botvinick
Sergey Levine
InfoBot: Transfer and Exploration via the Information Bottleneck
Anirudh Goyal
Riashat Islam
DJ Strouse
Zafarali Ahmed
Matthew Botvinick
Sergey Levine
A central challenge in reinforcement learning is discovering effective policies for tasks where rewards are sparsely distributed. We postulate that in the absence of useful reward signals, an effective exploration strategy should seek out decision states. These states lie at critical junctions in the state space from where the agent can transition to new, potentially unexplored regions. We propose to learn about decision states from prior experience. By training a goal-conditioned policy with an information bottleneck, we can identify decision states by examining where the model actually leverages the goal state. We find that this simple mechanism effectively identifies decision states, even in partially observed settings. In effect, the model learns the sensory cues that correlate with potential subgoals. In new environments, this model can then identify novel subgoals for further exploration, guiding the agent through a sequence of potential decision states and through new regions of the state space.
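The bottleneck signal can be sketched with toy Gaussian distributions (the encoder, dimensions, and unit-Gaussian prior below are illustrative assumptions, not the paper's architecture):

```python
# Sketch of the information-bottleneck signal: a goal-conditioned encoder
# pays a KL cost for depending on the goal, so states where that KL is high
# are candidate "decision states". Distributions here are toy Gaussians.
import torch
from torch.distributions import Normal, kl_divergence

def goal_dependence(encoder, state, goal):
    mu, log_std = encoder(torch.cat([state, goal], dim=-1)).chunk(2, dim=-1)
    posterior = Normal(mu, log_std.exp())           # p(z | s, g)
    prior = Normal(torch.zeros_like(mu), 1.0)       # goal-independent prior over z
    return kl_divergence(posterior, prior).sum(-1)  # high KL => decision state

encoder = torch.nn.Linear(8 + 8, 2 * 4)  # toy encoder: (state, goal) -> (mu, log_std)
state, goal = torch.randn(1, 8), torch.randn(1, 8)
score = goal_dependence(encoder, state, goal)
```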
Recall Traces: Backtracking Models for Efficient Reinforcement Learning
Anirudh Goyal
Philemon Brakel
William Fedus
Soumye Singhal
Timothy P. Lillicrap
Sergey Levine
In many environments only a tiny subset of all states yield high reward. In these cases, few of the interactions with the environment provide a relevant learning signal. Hence, we may want to preferentially train on those high-reward states and the probable trajectories leading to them. To this end, we advocate for the use of a backtracking model that predicts the preceding states that terminate at a given high-reward state. We can train a model which, starting from a high value state (or one that is estimated to have high value), predicts and samples which (state, action)-tuples may have led to that high value state. These traces of (state, action) pairs, which we refer to as Recall Traces, sampled from this backtracking model starting from a high value state, are informative as they terminate in good states, and hence we can use these traces to improve a policy. We provide a variational interpretation for this idea and a practical algorithm in which the backtracking model samples from an approximate posterior distribution over trajectories which lead to large rewards. Our method improves the sample efficiency of both on- and off-policy RL algorithms across several environments and tasks.
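The backtracking interface can be sketched as follows (an untrained stand-in model; the architecture is an assumption, and a real version would sample from predicted distributions rather than return point estimates):

```python
# Sketch of a backtracking model: starting from a high-value state, repeatedly
# predict the (previous state, action) that could have led there, building a
# Recall Trace to train on.
import torch
import torch.nn as nn

class BacktrackingModel(nn.Module):
    def __init__(self, state_dim=4, action_dim=2):
        super().__init__()
        self.net = nn.Linear(state_dim, state_dim + action_dim)

    def forward(self, state):
        out = self.net(state)
        d = state.shape[-1]
        return out[..., :d], out[..., d:]  # (previous state, action)

model = BacktrackingModel()
state = torch.randn(1, 4)   # a high-value state
trace = []
for _ in range(5):          # walk backwards to build a recall trace
    state, action = model(state)
    trace.append((state.detach(), action.detach()))
```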
Blindfold Baselines for Embodied QA
We explore blindfold (question-only) baselines for Embodied Question Answering. The EmbodiedQA task requires an agent to answer a question by intelligently navigating in a simulated environment, gathering necessary visual information only through first-person vision before finally answering. Consequently, a blindfold baseline which ignores the environment and visual information is a degenerate solution, yet we show through our experiments on the EQAv1 dataset that a simple question-only baseline achieves state-of-the-art results on the EmbodiedQA task in all cases except when the agent is spawned extremely close to the object.
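A question-only baseline of this kind can be sketched in a few lines (the vocabulary size, dimensions, and answer count below are arbitrary assumptions, not the paper's exact model):

```python
# Sketch of a question-only ("blindfold") baseline: encode the question text
# and classify the answer without ever looking at the environment.
import torch
import torch.nn as nn

class BlindfoldBaseline(nn.Module):
    def __init__(self, vocab=1000, dim=64, num_answers=50):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, num_answers)

    def forward(self, question_tokens):  # (batch, seq_len) token ids
        _, (h, _) = self.rnn(self.embed(question_tokens))
        return self.head(h[-1])          # answer logits; no vision used

question = torch.randint(0, 1000, (1, 10))
logits = BlindfoldBaseline()(question)
```

That such a model is competitive is exactly the paper's point: much of the benchmark's signal lives in question statistics rather than in navigation.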