
Liam Paull

Core Academic Member
Canada CIFAR AI Chair
Associate Professor, Université de Montréal, Department of Computer Science and Operations Research
Research Topics
Computer Vision
Deep Learning
Robotics

Biography

Liam Paull is an associate professor at Université de Montréal and co-leads the Montréal Robotics and Embodied AI Lab (REAL). His lab works on a variety of robotics problems, including building representations of the world (for applications such as simultaneous localization and mapping), modelling uncertainty, and developing better workflows for teaching robotic agents new tasks (through simulation or demonstration, for example).

Previously, Paull was a research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT), where he led the autonomous car project funded by the Toyota Research Institute (TRI). He completed a postdoc with the Marine Robotics Group at MIT, where he worked on Simultaneous Localization and Mapping (SLAM) for underwater robots.

His PhD from the University of New Brunswick in 2013 focused on robust and adaptive planning for underwater vehicles. He is also the co-founder and director of the Duckietown Foundation, which is dedicated to making engaging robotics learning experiences accessible to everyone.

Current Students

Independent visiting researcher - Sapienza
Master's Research - Université de Montréal
Master's Research - Université de Montréal
PhD - Université de Montréal
PhD - Université de Montréal
PhD - Université de Montréal
PhD - Université de Montréal
Collaborating researcher - Université de Montréal
Collaborating researcher
Collaborating Alumni - Université de Montréal
PhD - Université de Montréal
PhD - Université de Montréal
Postdoctorate - Université de Montréal
Collaborating researcher - Université Laval
Master's Research - Université de Montréal
PhD - Université de Montréal
Master's Research - Université de Montréal

Publications

MeshDiffusion: Score-based Generative 3D Mesh Modeling
Zhen Liu
Yao Feng
Michael J. Black
Weiyang Liu
We consider the task of generating realistic 3D shapes, which is useful for a variety of applications such as automatic scene generation and physical simulation. Compared to other 3D representations like voxels and point clouds, meshes are more desirable in practice, because (1) they enable easy and arbitrary manipulation of shapes for relighting and simulation, and (2) they can fully leverage the power of modern graphics pipelines, which are mostly optimized for meshes. Previous scalable methods for generating meshes typically rely on sub-optimal post-processing, and they tend to produce overly-smooth or noisy surfaces without fine-grained geometric details. To overcome these shortcomings, we take advantage of the graph structure of meshes and use a simple yet very effective generative modeling method to generate 3D meshes. Specifically, we represent meshes with deformable tetrahedral grids, and then train a diffusion model on this direct parameterization. We demonstrate the effectiveness of our model on multiple generative tasks.
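The core recipe is to treat the grid parameterization as a fixed-size tensor and run a standard diffusion training loop over it. Below is a minimal, hypothetical sketch in PyTorch: the grid encoding (one SDF value plus a 3D vertex offset per grid point) and the tiny MLP denoiser are illustrative stand-ins, not the paper's architecture.

```python
# A minimal sketch of DDPM-style training on a fixed-size grid
# parameterization. Sizes and the MLP denoiser are assumptions.
import torch
import torch.nn as nn

N_VERTS, FEATS, T = 1024, 4, 1000            # grid points, (sdf + xyz offset), diffusion steps
betas = torch.linspace(1e-4, 2e-2, T)        # standard linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, 0)

denoiser = nn.Sequential(                    # stand-in for a mesh-aware network
    nn.Linear(FEATS + 1, 128), nn.SiLU(), nn.Linear(128, FEATS)
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

def train_step(x0):                          # x0: (B, N_VERTS, FEATS) clean grids
    B = x0.shape[0]
    t = torch.randint(0, T, (B,))
    a = alphas_bar[t].view(B, 1, 1)
    eps = torch.randn_like(x0)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * eps          # forward diffusion
    t_emb = (t.float() / T).view(B, 1, 1).expand(B, N_VERTS, 1)
    pred = denoiser(torch.cat([xt, t_emb], dim=-1))
    loss = ((pred - eps) ** 2).mean()                  # predict the injected noise
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```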
Robust and Controllable Object-Centric Learning through Energy-based Models
Ruixiang Zhang
Tong Che
Boris Ivanovic
Renhao Wang
Marco Pavone
Humans are remarkably good at understanding and reasoning about complex visual scenes. The capability of decomposing low-level observations into discrete objects allows us to build a grounded abstract representation and identify the compositional structure of the world. Thus it is a crucial step for machine learning models to be capable of inferring objects and their properties from a visual scene without explicit supervision. However, existing work on object-centric representation learning either relies on tailor-made neural network modules or assumes sophisticated models of the underlying generative and inference processes. In this work, we present EGO, a conceptually simple and general approach to learning object-centric representations through energy-based models. By forming a permutation-invariant energy function using vanilla attention blocks that are readily available in Transformers, we can infer object-centric latent variables via gradient-based MCMC methods where permutation equivariance is automatically guaranteed. We show that EGO can be easily integrated into existing architectures and can effectively extract high-quality object-centric representations, leading to better segmentation accuracy and competitive downstream task performance. We empirically evaluate the robustness of the representations learned by EGO under distribution shift. Finally, we demonstrate the effectiveness of EGO in systematic compositional generalization by recomposing learned energy functions for novel scene generation and manipulation.
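The inference step the abstract describes, gradient-based MCMC on a permutation-invariant energy, can be sketched as follows. This is a toy illustration: the dot-product attention energy below is a stand-in for the paper's Transformer-based energy function.

```python
# Langevin-dynamics inference of object latents ("slots") on a
# permutation-invariant energy. All sizes are illustrative.
import torch

def energy(slots, feats):
    # slots: (K, D) object latents; feats: (N, D) image features.
    # logsumexp over slots makes the energy invariant to slot order.
    attn = feats @ slots.t() / slots.shape[-1] ** 0.5   # (N, K) match scores
    return -torch.logsumexp(attn, dim=1).sum()          # low energy = well explained

def langevin_infer(feats, K=5, D=64, steps=50, lr=1e-2):
    slots = torch.randn(K, D, requires_grad=True)
    for _ in range(steps):
        E = energy(slots, feats)
        (g,) = torch.autograd.grad(E, slots)
        with torch.no_grad():                           # Langevin update
            slots -= lr * g
            slots += (2 * lr) ** 0.5 * torch.randn_like(slots)
    return slots.detach()
```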
GROOD: GRadient-aware Out-Of-Distribution detection in interpolated manifolds
Mostafa ElAraby
Sabyasachi Sahoo
Yann Batiste Pequignot
Paul Novello
Multi-Agent Reinforcement Learning for Fast-Timescale Demand Response of Residential Loads
Vincent Mai
Philippe Maisonneuve
Tianyu Zhang
Hadi Nekoei
To integrate high amounts of renewable energy resources, electrical power grids must be able to cope with high-amplitude, fast-timescale variations in power generation. Frequency regulation through demand response has the potential to coordinate temporally flexible loads, such as air conditioners, to counteract these variations. Existing approaches for discrete control with dynamic constraints struggle to provide satisfactory performance for fast-timescale action selection with hundreds of agents. We propose a decentralized agent trained with multi-agent proximal policy optimization with localized communication. We explore two communication frameworks: hand-engineered, or learned through targeted multi-agent communication. The resulting policies perform well and robustly for frequency regulation, and scale seamlessly to arbitrary numbers of houses for constant processing times.
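A minimal sketch of the decentralized setup with hand-engineered localized communication might look like the following. The observation and message sizes, the ring neighbourhood, and the on/off action space are illustrative assumptions, not the paper's exact design.

```python
# Each house runs the same policy over its own observation plus
# messages from a fixed set of neighbours. Sizes are assumptions.
import torch
import torch.nn as nn

OBS, MSG, N_NEIGH = 8, 4, 2                  # per-house observation / message sizes

class HousePolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.msg_head = nn.Linear(OBS, MSG)              # message to broadcast
        self.pi = nn.Sequential(
            nn.Linear(OBS + N_NEIGH * MSG, 64), nn.Tanh(),
            nn.Linear(64, 2),                            # logits: AC on / off
        )

    def forward(self, obs, neigh_msgs):
        x = torch.cat([obs] + neigh_msgs, dim=-1)
        return torch.distributions.Categorical(logits=self.pi(x))

policy = HousePolicy()                                   # weights shared across houses
obs = [torch.randn(OBS) for _ in range(10)]              # 10 houses
msgs = [policy.msg_head(o) for o in obs]
# each house acts on its own obs + its two ring neighbours' messages
acts = [policy(obs[i], [msgs[i - 1], msgs[(i + 1) % 10]]).sample()
        for i in range(10)]
```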
Lifelong Topological Visual Navigation
Rey Reza Wiyatno
Anqi Xu
Commonly, learning-based topological navigation approaches produce a local policy while preserving some loose connectivity of the space through a topological map. Nevertheless, spurious or missing edges in the topological graph often lead to navigation failure. In this work, we propose a sampling-based graph building method, which results in sparser graphs yet with higher navigation performance compared to baseline methods. We also propose graph maintenance strategies that eliminate spurious edges and expand the graph as needed, which improves lifelong navigation performance. Unlike controllers that learn from fixed training environments, we show that our model can be fine-tuned using only a small number of collected trajectory images from a real-world environment where the agent is deployed. We demonstrate successful navigation after fine-tuning on real-world environments, and notably show significant navigation improvements over time by applying our lifelong graph maintenance strategies.
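The graph-maintenance idea can be illustrated with a small sketch: prune an edge once repeated traversal failures suggest it is spurious, and add a node and edges when a new place looks reachable. The networkx representation, the failure threshold, and the reach_score callable are assumptions for illustration, not the paper's implementation.

```python
# Topological map kept as a graph; edges are pruned or added as the
# robot gathers experience. reach_score is a learned-estimator stand-in.
import networkx as nx

FAIL_LIMIT = 3

def record_traversal(G, u, v, success):
    """Update edge statistics after trying to navigate u -> v."""
    if success:
        G.edges[u, v]["fails"] = 0
    else:
        G.edges[u, v]["fails"] = G.edges[u, v].get("fails", 0) + 1
        if G.edges[u, v]["fails"] >= FAIL_LIMIT:
            G.remove_edge(u, v)               # spurious edge: prune it

def maybe_add_node(G, new_img, reach_score, tau=0.7):
    """Expand the graph with a newly observed place."""
    node = len(G)
    G.add_node(node, img=new_img)
    for other in list(G.nodes):
        if other != node and reach_score(new_img, G.nodes[other]["img"]) > tau:
            G.add_edge(node, other)
    return node
```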
Monocular Robot Navigation with Self-Supervised Pretrained Vision Transformers
Miguel Saavedra-Ruiz
Sacha Morin
In this work, we consider the problem of learning a perception model for monocular robot navigation using few annotated images. Using a Vision Transformer (ViT) pretrained with a label-free self-supervised method, we successfully train a coarse image segmentation model for the Duckietown environment using 70 training images. Our model performs coarse image segmentation at the …
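The overall recipe, frozen self-supervised ViT patch features plus a small head trained on few labelled images, can be sketched as follows. The DINO checkpoint loaded from torch.hub and its get_intermediate_layers() call are real, but the linear head, the three-class setup, and the patch-grid reshape are illustrative assumptions.

```python
# Frozen DINO ViT patch features + a small linear head for
# coarse patch-level segmentation. Head and class count are assumptions.
import torch
import torch.nn as nn

vit = torch.hub.load("facebookresearch/dino:main", "dino_vits8")
vit.eval()                                      # frozen backbone

head = nn.Linear(384, 3)                        # 3 coarse classes per patch

@torch.no_grad()
def patch_features(img):                        # img: (1, 3, H, W), H and W divisible by 8
    tokens = vit.get_intermediate_layers(img, n=1)[0]   # (1, 1+N, 384)
    return tokens[:, 1:, :]                     # drop CLS token -> (1, N, 384)

def segment(img):
    feats = patch_features(img)
    logits = head(feats)                        # (1, N, 3) patch-level logits
    h = img.shape[-2] // 8
    return logits.argmax(-1).reshape(1, h, -1)  # coarse segmentation grid

img = torch.randn(1, 3, 480, 640)               # dummy input: 60 x 80 patch grid
mask = segment(img)                              # (1, 60, 80) class indices
```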
Perceptual Generative Autoencoders
Zijun Zhang
Ruixiang Zhang
Zongpeng Li
Modern generative models are usually designed to match target distributions directly in the data space, where the intrinsic dimension of data can be much lower than the ambient dimension. We argue that this discrepancy may contribute to the difficulties in training generative models. We therefore propose to map both the generated and target distributions to a latent space using the encoder of a standard autoencoder, and to train the generator (or decoder) to match the target distribution in the latent space. Specifically, we enforce consistency in both the data space and the latent space with theoretically justified data and latent reconstruction losses. The resulting generative model, which we call a perceptual generative autoencoder (PGA), is then trained with a maximum likelihood or variational autoencoder (VAE) objective. With maximum likelihood, PGAs generalize the idea of reversible generative models to unrestricted neural network architectures and an arbitrary number of latent dimensions. When combined with VAEs, PGAs substantially improve over the baseline VAEs in terms of sample quality. Compared to other autoencoder-based generative models using simple priors, PGAs achieve state-of-the-art FID scores on CIFAR-10 and CelebA.
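The latent-space matching idea can be sketched compactly: encode both real and generated samples, and penalize discrepancies in both spaces. The networks, unit loss weights, and flattened-image input below are illustrative, not the paper's exact objective.

```python
# Sketch of the PGA idea: reconstruct in data space, enforce latent
# consistency, and match the generated distribution in latent space.
import torch
import torch.nn as nn

D_LATENT = 64
enc = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, D_LATENT))
dec = nn.Sequential(nn.Linear(D_LATENT, 256), nn.ReLU(), nn.Linear(256, 784))
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-4)

def pga_losses(x):                            # x: (B, 784) flattened images
    h = enc(x)                                # latent code of real data
    x_rec = dec(h)
    data_rec = ((x_rec - x) ** 2).mean()                   # data-space reconstruction
    latent_rec = ((enc(x_rec) - h.detach()) ** 2).mean()   # latent consistency
    z = torch.randn(x.shape[0], D_LATENT)                  # prior sample
    gen_latent = ((enc(dec(z)) - z) ** 2).mean()           # match generated dist in latent space
    return data_rec + latent_rec + gen_latent

x = torch.rand(32, 784)
loss = pga_losses(x)
opt.zero_grad()
loss.backward()
opt.step()
```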
Active Domain Randomization
Bhairav Mehta
Manfred Diaz
Florian Golemo
Domain randomization is a popular technique for improving domain transfer, often used in a zero-shot setting when the target domain is unknown or cannot easily be used for training. In this work, we empirically examine the effects of domain randomization on agent generalization. Our experiments show that domain randomization may lead to suboptimal, high-variance policies, which we attribute to the uniform sampling of environment parameters. We propose Active Domain Randomization, a novel algorithm that learns a parameter sampling strategy. Our method looks for the most informative environment variations within the given randomization ranges by leveraging the discrepancies of policy rollouts in randomized and reference environment instances. We find that training more frequently on these instances leads to better overall agent generalization. In addition, when domain randomization and policy transfer fail, Active Domain Randomization offers more insight into the deficiencies of both the chosen parameter ranges and the learned policy, allowing for more focused debugging. Our experiments across various physics-based simulated tasks and a real-robot task show that this enhancement leads to more robust, consistent policies.
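The selection principle can be illustrated with a simple sketch: score candidate environment parameters by the discrepancy between policy rollouts in the randomized and reference environments, and train on the most informative ones. The paper learns the sampling strategy rather than using the brute-force search shown here, and the rollout interface is a user-supplied placeholder.

```python
# Prefer environment parameters where randomized and reference
# rollouts diverge most. rollout_return(params, policy) -> scalar
# episode return is supplied by the caller.
import random

def pick_informative_params(rollout_return, policy, ref_params, ranges, n=16):
    ref = rollout_return(ref_params, policy)
    best, best_gap = None, -1.0
    for _ in range(n):
        cand = {k: random.uniform(lo, hi) for k, (lo, hi) in ranges.items()}
        gap = abs(rollout_return(cand, policy) - ref)   # discrepancy signal
        if gap > best_gap:
            best, best_gap = cand, gap
    return best             # train next on the most informative variation

# usage (hypothetical): pick_informative_params(my_rollout_fn, policy,
#                                               {"friction": 1.0}, {"friction": (0.5, 1.5)})
```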
Curriculum in Gradient-Based Meta-Reinforcement Learning
Bhairav Mehta
Tristan Deleu
Sharath Chandra Raparthy
Gradient-based meta-learners such as Model-Agnostic Meta-Learning (MAML) have shown strong few-shot performance in supervised and reinforcement learning settings. However, specifically in the case of meta-reinforcement learning (meta-RL), we can show that gradient-based meta-learners are sensitive to task distributions. With the wrong curriculum, agents suffer the effects of meta-overfitting, shallow adaptation, and adaptation instability. In this work, we begin by highlighting intriguing failure cases of gradient-based meta-RL and show that task distributions can wildly affect algorithmic outputs, stability, and performance. To address this problem, we leverage insights from the recent literature on domain randomization and propose meta Active Domain Randomization (meta-ADR), which learns a curriculum of tasks for gradient-based meta-RL in a similar way as ADR does for sim2real transfer. We show that this approach induces more stable policies on a variety of simulated locomotion and navigation tasks. We assess in- and out-of-distribution generalization and find that the learned task distributions, even in an unstructured task space, greatly improve the adaptation performance of MAML. Finally, we motivate the need for better benchmarking in meta-RL that prioritizes generalization over single-task adaptation performance.
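To make the curriculum idea concrete, here is a hypothetical sketch of a MAML-style step in which tasks with a large post-adaptation loss are sampled more often. The task interface (loss, loss_with_params) is an assumption for illustration, and meta-ADR learns the task-sampling strategy rather than using the softmax heuristic shown here.

```python
# MAML inner/outer loop with a simple "hard tasks sampled more often"
# curriculum. Task objects are placeholder interfaces.
import torch

def maml_step(policy, tasks, task_scores, inner_lr=0.1, k=4):
    probs = torch.softmax(torch.tensor(task_scores), dim=0)
    idx = torch.multinomial(probs, k, replacement=True)
    meta_loss = 0.0
    for i in idx.tolist():
        loss = tasks[i].loss(policy)                      # pre-adaptation loss
        grads = torch.autograd.grad(loss, policy.parameters(),
                                    create_graph=True)    # keep graph for outer step
        adapted = [p - inner_lr * g
                   for p, g in zip(policy.parameters(), grads)]
        post = tasks[i].loss_with_params(policy, adapted) # post-adaptation loss
        task_scores[i] = float(post)                      # harder task -> sampled more
        meta_loss = meta_loss + post
    return meta_loss / k                                  # backprop this for the outer update
```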
Your GAN is Secretly an Energy-based Model and You Should use Discriminator Driven Latent Sampling
Tong Che
Ruixiang Zhang
Jascha Sohl-Dickstein
Yuan Cao
We show that the sum of the implicit generator log-density …
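The abstract is truncated here, but the sampling procedure the title refers to can be sketched, with the caveat that this is a generic reconstruction rather than the paper's exact formulation: run Langevin dynamics in the generator's latent space on an energy that combines the latent prior with the discriminator's logit on the generated sample, then decode. G and D below are placeholder networks; D is assumed to output logits.

```python
# Discriminator-driven latent sampling via Langevin dynamics.
# E(z) = ||z||^2 / 2 - logit(D(G(z))): prior term + discriminator term.
import torch

def ddls_sample(G, D, z_dim=128, steps=100, step_size=1e-2):
    z = torch.randn(1, z_dim, requires_grad=True)
    for _ in range(steps):
        energy = 0.5 * (z ** 2).sum() - D(G(z)).sum()
        (g,) = torch.autograd.grad(energy, z)
        with torch.no_grad():                     # Langevin update in latent space
            z -= step_size * g
            z += (2 * step_size) ** 0.5 * torch.randn_like(z)
    return G(z.detach())                          # decode the refined latent
```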