(Rex) Devon Hjelm

Affiliate Member
Research Scientist, Apple MLR
Research Topics
Causality
Deep Learning
Generative Models
Information Theory
Online Learning
Probabilistic Models
Reasoning
Reinforcement Learning
Representation Learning

Current Students

PhD, Université de Montréal (co-supervisor)

Publications

Unsupervised State Representation Learning in Atari
Ankesh Anand
Evan Racah
Sherjil Ozair
Marc-Alexandre Côté
State representation learning, or the ability to capture latent generative factors of an environment, is crucial for building intelligent agents that can perform a wide variety of tasks. Learning such representations without supervision from rewards is a challenging open problem. We introduce a method that learns state representations by maximizing mutual information across spatially and temporally distinct features of a neural encoder of the observations. We also introduce a new benchmark based on Atari 2600 games where we evaluate representations based on how well they capture the ground truth state variables. We believe this new framework for evaluating representation learning models will be crucial for future representation learning research. Finally, we compare our technique with other state-of-the-art generative and contrastive representation learning methods. The code associated with this work is available at this https URL
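The temporal half of the objective described above can be sketched with an InfoNCE-style contrastive loss: features of an observation at time t should score higher against features of the next observation than against other samples in the batch. This is a minimal illustration, not the paper's architecture; the linear `encode` stands in for the convolutional encoder, and the spatial contrast over feature-map locations is analogous.

```python
import numpy as np

rng = np.random.default_rng(0)
B, d, k = 64, 32, 16  # batch size, observation dim, feature dim

def encode(obs, W):
    """Stand-in linear encoder; the real method uses a conv net over frames."""
    return obs @ W

def infonce_loss(z_t, z_tp1):
    """Contrastive bound on MI between features at time t and t+1.
    Row i of z_t is paired with row i of z_tp1 (positive); the other
    rows in the batch serve as negatives."""
    logits = z_t @ z_tp1.T                        # (B, B) similarity scores
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # cross-entropy on the diagonal

W = rng.standard_normal((d, k)) / np.sqrt(d)
obs_t = rng.standard_normal((B, d))
obs_tp1 = obs_t + 0.1 * rng.standard_normal((B, d))  # temporally adjacent

z_t, z_tp1 = encode(obs_t, W), encode(obs_tp1, W)
loss_aligned = infonce_loss(z_t, z_tp1)
loss_mismatched = infonce_loss(z_t, np.roll(z_tp1, 1, axis=0))
print(f"aligned pairs: {loss_aligned:.3f}, mismatched pairs: {loss_mismatched:.3f}")
```

Minimizing this loss pushes the encoder to keep information that persists across consecutive frames, i.e. the latent state variables, which is why the aligned loss is far below the mismatched one even with an untrained encoder here.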
Keep Drawing It: Iterative language-based image generation and editing
Alaaeldin El-Nouby
Shikhar Sharma
Hannes Schulz
Layla El Asri
Graham W. Taylor
Conditional text-to-image generation approaches commonly focus on generating a single image in a single step. One practical extension beyond one-step generation is an interactive system that generates an image iteratively, conditioned on ongoing linguistic input / feedback. This is significantly more challenging as such a system must understand and keep track of the ongoing context and history. In this work, we present a recurrent image generation model which takes into account both the generated output up to the current step as well as all past instructions for generation. We show that our model is able to generate the background, add new objects, apply simple transformations to existing objects, and correct previous mistakes. We believe our approach is an important step toward interactive generation.
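The recurrence described above, a state that folds in each new instruction and the canvas generated so far, can be sketched in a toy form. All matrices and dimensions here are hypothetical stand-ins; the actual system uses learned neural encoders and an image generator rather than these random linear maps.

```python
import numpy as np

rng = np.random.default_rng(0)
d_instr, d_hidden, d_canvas = 8, 16, 10  # illustrative sizes only

# Hypothetical parameters for a toy recurrent conditioner.
Wh = 0.1 * rng.standard_normal((d_hidden, d_hidden))
Wi = 0.1 * rng.standard_normal((d_instr, d_hidden))
Wc = 0.1 * rng.standard_normal((d_canvas, d_hidden))
Wg = 0.1 * rng.standard_normal((d_hidden, d_canvas))

def step(h, canvas, instruction):
    """Fold the new instruction and the current canvas into the state,
    then emit an edit to the canvas (a generator network in practice)."""
    h = np.tanh(h @ Wh + instruction @ Wi + canvas @ Wc)
    canvas = canvas + h @ Wg  # edit the existing image rather than restart
    return h, canvas

h = np.zeros(d_hidden)
canvas = np.zeros(d_canvas)
instructions = rng.standard_normal((3, d_instr))  # three feedback turns
for instr in instructions:
    h, canvas = step(h, canvas, instr)
```

Because the hidden state carries both the instruction history and the evolving canvas, later turns can modify earlier output, which is what enables adding objects or correcting previous mistakes.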
Deep Graph Infomax
Petar Veličković
William Fedus
William L. Hamilton
Pietro Lio
Mutual Information Neural Estimation
Ishmael Belghazi
Aristide Baratin
Sai Rajeswar
Sherjil Ozair
We argue that the estimation of mutual information between high dimensional continuous random variables can be achieved by gradient descent over neural networks. We present a Mutual Information Neural Estimator (MINE) that is linearly scalable in dimensionality as well as in sample size, trainable through back-prop, and strongly consistent. We present a handful of applications on which MINE can be used to minimize or maximize mutual information. We apply MINE to improve adversarially trained generative models. We also use MINE to implement Information Bottleneck, applying it to supervised classification; our results demonstrate substantial improvement in flexibility and performance in these settings.
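The bound MINE optimizes is the Donsker-Varadhan representation, I(X; Y) ≥ E_P[T] − log E_{P_X ⊗ P_Y}[e^T], for any critic T. The sketch below evaluates that bound on correlated Gaussians with a fixed bilinear critic; MINE itself trains a neural critic T_θ by gradient ascent on the same quantity, so treat this as an illustration of the bound, not the estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 100_000, 0.8

# Correlated Gaussian pair with known ground truth:
# I(X; Y) = -0.5 * log(1 - rho^2) nats.
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

def dv_bound(t_joint, t_marginal):
    """Donsker-Varadhan lower bound: E_P[T] - log E_{P_X x P_Y}[exp(T)]."""
    return t_joint.mean() - np.log(np.mean(np.exp(t_marginal)))

# Fixed bilinear critic T(x, y) = a * x * y; because it is not the optimal
# critic, the estimate is a valid but loose lower bound on the true MI.
a = 0.4
t_joint = a * x * y                      # samples from the joint
t_marginal = a * x * rng.permutation(y)  # shuffling y simulates P_X x P_Y

mi_true = -0.5 * np.log(1 - rho**2)
mi_est = dv_bound(t_joint, t_marginal)
print(f"true MI = {mi_true:.3f} nats, DV bound = {mi_est:.3f} nats")
```

Replacing the fixed critic with a small network and ascending the bound with back-prop tightens the estimate toward the true MI, which is the core of the MINE procedure.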