
David Meger

Associate Academic Member
Associate Professor, McGill University, School of Computer Science
Research Topics
Computer Vision
Reinforcement Learning

Biography

David Meger is an associate professor at McGill University’s School of Computer Science.

He co-directs the Mobile Robotics Lab within the Centre for Intelligent Machines, one of Canada's largest and longest-running robotics research groups. He was the general chair of Canada’s first joint CS-CAN conference in 2023.

Meger's research contributions include visually guided robots powered by active vision and learning, deep reinforcement learning models that are widely cited and used by researchers and industry worldwide, and field robotics systems that enable autonomous deployment underwater and on land.

Current Students

Master's Research - McGill University
Collaborating researcher - McGill University
Principal supervisor:
PhD - McGill University
PhD - McGill University
Co-supervisor:
PhD - McGill University
Co-supervisor:
Master's Research - McGill University
Co-supervisor:
Master's Research - McGill University
Co-supervisor:
PhD - McGill University
Principal supervisor:
PhD - McGill University
Master's Research - McGill University
PhD - McGill University
Co-supervisor:
PhD - McGill University

Publications

NeurIPS 2022 Competition: Driving SMARTS
Amir Hossein Rasouli
R. Goebel
Matthew E. Taylor
Iuliia Kotseruba
Soheil Alizadeh
Tianpei Yang
Montgomery Alban
Florian Shkurti
Yuzheng Zhuang
Adam Ścibior
Kasra Rezaee
Animesh Garg
Jun Luo
Weinan Zhang
Xinyu Wang
Xiangshan Chen
Distributional Hamilton-Jacobi-Bellman Equations for Continuous-Time Reinforcement Learning
Why Should I Trust You, Bellman? The Bellman Error is a Poor Replacement for Value Error
Scott Fujimoto
Ofir Nachum
Shixiang Shane Gu
In this work, we study the use of the Bellman equation as a surrogate objective for value prediction accuracy. While the Bellman equation is uniquely solved by the true value function over all state-action pairs, we find that the Bellman error (the difference between both sides of the equation) is a poor proxy for the accuracy of the value function. In particular, we show that (1) due to cancellations from both sides of the Bellman equation, the magnitude of the Bellman error is only weakly related to the distance to the true value function, even when considering all state-action pairs, and (2) in the finite data regime, the Bellman equation can be satisfied exactly by infinitely many suboptimal solutions. This means that the Bellman error can be minimized without improving the accuracy of the value function. We demonstrate these phenomena through a series of propositions, illustrative toy examples, and empirical analysis in standard benchmark domains.
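A minimal sketch of phenomenon (2), using a hypothetical three-state chain with zero reward (not an example from the paper): on a finite dataset, a badly wrong value estimate can still have zero Bellman error on every observed transition.

```python
import numpy as np

# Toy illustration: a chain s0 -> s1 -> s2 with zero reward everywhere,
# so the true Q-value is 0 in every state.
gamma = 0.9
dataset = [("s0", "s1"), ("s1", "s2")]   # finite data: nothing observed beyond s2

# A deliberately wrong estimate, scaled so the Bellman residual cancels
# on every transition that actually appears in the dataset.
c = 5.0
Q = {"s0": c, "s1": c / gamma, "s2": c / gamma**2}
Q_true = {s: 0.0 for s in Q}

bellman_errors = [Q[s] - (0.0 + gamma * Q[s_next]) for s, s_next in dataset]
value_errors = [abs(Q[s] - Q_true[s]) for s in Q]

print("Bellman errors on observed transitions:", np.round(bellman_errors, 6))  # all 0
print("Value errors:", np.round(value_errors, 2))                              # all >= 5
```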
Adaptive Confidence Calibration
Jonathan W. Pearce
IL-flOw: Imitation Learning from Observation using Normalizing Flows
Wei-Di Chang
Juan Higuera
Scott Fujimoto
Continuous MDP Homomorphisms and Homomorphic Policy Gradient
Learning Assisted Identification of Scenarios Where Network Optimization Algorithms Under-Perform
Dmitriy Rivkin
Di Wu
X. T. Chen
We present a generative adversarial method that uses deep learning to identify network load traffic conditions in which network optimization algorithms under-perform other known algorithms: the Deep Convolutional Failure Generator (DCFG). The spatial distribution of network load presents challenges for network operators for tasks such as load balancing, in which a network optimizer attempts to maintain high-quality communication while at the same time abiding by capacity constraints. Testing a network optimizer for all possible load distributions is challenging if not impossible. We propose a novel method that searches for load situations where a target network optimization method under-performs a baseline; these situations are key test cases that can be used for future refinement and performance optimization. By modeling a realistic network simulator's quality assessments with a deep network and, in parallel, optimizing a load generation network, our method efficiently searches the high-dimensional space of load patterns and reliably finds cases in which a target network optimization method under-performs a baseline by a significant margin.
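A hedged sketch of the adversarial search idea: a generator proposes load patterns while frozen surrogate models predict the quality achieved by the target and baseline optimizers, and the generator is trained to maximize the predicted quality gap. All module names, sizes, and the gap objective below are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

N_CELLS = 32      # number of cells / load dimensions (illustrative)
NOISE_DIM = 16

# Generator: noise -> non-negative per-cell load pattern.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_CELLS), nn.Softplus(),
)
# Stand-ins for deep surrogates fit to the simulator's quality scores.
quality_under_target = nn.Sequential(nn.Linear(N_CELLS, 64), nn.ReLU(), nn.Linear(64, 1))
quality_under_baseline = nn.Sequential(nn.Linear(N_CELLS, 64), nn.ReLU(), nn.Linear(64, 1))
for p in list(quality_under_target.parameters()) + list(quality_under_baseline.parameters()):
    p.requires_grad_(False)   # surrogates are frozen; only the generator is trained

opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
for step in range(1000):
    z = torch.randn(128, NOISE_DIM)
    load = generator(z)                                   # candidate load patterns
    gap = quality_under_baseline(load) - quality_under_target(load)
    loss = -gap.mean()                                    # maximize the predicted performance gap
    opt.zero_grad(); loss.backward(); opt.step()
```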
Active 3D Shape Reconstruction from Vision and Touch
Edward J. Smith
Luis Pineda
Roberto Calandra
Jitendra Malik
Michal Drozdzal
Humans build 3D understandings of the world through active object exploration, using jointly their senses of vision and touch. However, in 3D shape reconstruction, most recent progress has relied on static datasets of limited sensory data such as RGB images, depth maps or haptic readings, leaving the active exploration of the shape largely unexplored. In active touch sensing for 3D reconstruction, the goal is to actively select the tactile readings that maximize the improvement in shape reconstruction accuracy. However, the development of deep learning-based active touch models is largely limited by the lack of frameworks for shape exploration. In this paper, we focus on this problem and introduce a system composed of: 1) a haptic simulator leveraging high spatial resolution vision-based tactile sensors for active touching of 3D objects; 2) a mesh-based 3D shape reconstruction model that relies on tactile or visuotactile signals; and 3) a set of data-driven solutions with either tactile or visuotactile priors to guide the shape exploration. Our framework enables the development of the first fully data-driven solutions to active touch on top of learned models for object understanding. Our experiments show the benefits of such solutions in the task of 3D shape understanding where our models consistently outperform natural baselines. We provide our framework as a tool to foster future research in this direction.
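An interface-level sketch of the active touch loop the abstract describes: at each step a learned policy scores candidate touch locations by the expected improvement in reconstruction accuracy. Every object and method name here is a placeholder assumption, not the paper's actual API.

```python
# Placeholder interfaces: `simulator`, `reconstructor`, and `policy` stand in
# for the haptic simulator, the mesh-based reconstruction model, and a
# data-driven exploration policy; none of these names come from the paper.
def active_touch_reconstruction(simulator, reconstructor, policy, n_touches=5):
    readings = []                               # tactile / visuotactile signals collected so far
    mesh = reconstructor.predict(readings)      # initial shape estimate
    for _ in range(n_touches):
        candidates = simulator.sample_touch_locations()
        # Score each candidate touch by the improvement in reconstruction
        # accuracy the learned policy expects it to yield, then act greedily.
        scores = [policy.expected_improvement(mesh, readings, c) for c in candidates]
        best = candidates[scores.index(max(scores))]
        readings.append(simulator.touch(best))  # execute the touch, collect the reading
        mesh = reconstructor.predict(readings)  # refine the shape estimate
    return mesh
```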
Trajectory-Constrained Deep Latent Visual Attention for Improved Local Planning in Presence of Heterogeneous Terrain
Stefan Wapnick
Travis Manderson
We present a reward-predictive, model-based learning method featuring trajectory-constrained visual attention for use in mapless, local visual navigation tasks. Our method learns to place visual attention at locations in latent image space which follow trajectories caused by vehicle control actions to later enhance predictive accuracy during planning. Our attention model is jointly optimized by the task-specific loss and an additional trajectory-constraint loss, allowing adaptability yet encouraging a regularized structure for improved generalization and reliability. Importantly, visual attention is applied in latent feature map space instead of raw image space to promote efficient planning. We validated our model in visual navigation tasks of planning low-turbulence, collision-free trajectories in off-road settings and hill climbing with locking differentials in the presence of slippery terrain. Experiments involved randomized, procedurally generated simulation and real-world environments. We found that our method improved generalization and learning efficiency when compared to no-attention and self-attention alternatives.
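A rough sketch of how the joint objective could look, assuming a convolutional encoder, a single attention head over the latent feature map, and a binary mask marking the latent locations swept by the action trajectory. Shapes, modules, and the weighting are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

encoder = nn.Conv2d(3, 16, 3, padding=1)      # image -> latent feature map
attention_head = nn.Conv2d(16, 1, 1)          # latent feature map -> attention logits
reward_head = nn.Linear(16, 1)                # attended features -> predicted reward

def joint_loss(image, action_trajectory_mask, true_reward, trajectory_weight=0.1):
    # image: (B, 3, H, W); action_trajectory_mask: (B, H, W) in {0, 1}; true_reward: (B, 1)
    feats = encoder(image)                                            # (B, 16, H, W)
    attn = torch.softmax(attention_head(feats).flatten(1), dim=1)     # (B, H*W)
    attended = (feats.flatten(2) * attn.unsqueeze(1)).sum(-1)         # (B, 16)
    task_loss = nn.functional.mse_loss(reward_head(attended), true_reward)
    # Penalize attention mass placed outside the action-conditioned trajectory region.
    off_trajectory = attn * (1.0 - action_trajectory_mask.flatten(1))
    constraint_loss = off_trajectory.sum(dim=1).mean()
    return task_loss + trajectory_weight * constraint_loss
```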
Latent Attention Augmentation for Robust Autonomous Driving Policies
Ran Cheng
Christopher Agia
Florian Shkurti
Model-free reinforcement learning has become a viable approach for vision-based robot control. However, sample complexity and adaptability to domain shifts remain persistent challenges when operating in high-dimensional observation spaces (images, LiDAR), such as those that are involved in autonomous driving. In this paper, we propose a flexible framework by which a policy’s observations are augmented with robust attention representations in the latent space to guide the agent’s attention during training. Our method encodes local and global descriptors of the augmented state representations into a compact latent vector, and scene dynamics are approximated by a recurrent network that processes the latent vectors in sequence. We outline two approaches for constructing attention maps: a supervised pipeline leveraging semantic segmentation networks, and an unsupervised pipeline relying only on classical image processing techniques. We conduct our experiments in simulation and test the learned policy against varying seasonal effects and weather conditions. Our design decisions are supported in a series of ablation studies. The results demonstrate that our state augmentation method both improves learning efficiency and encourages robust domain adaptation when compared to common end-to-end frameworks and methods that learn directly from intermediate representations.
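A small sketch of the unsupervised attention idea mentioned above, using edge magnitude from classical image processing as a stand-in attention map and stacking it onto the observation before encoding. The specific operator and channel layout are assumptions, not the paper's pipeline.

```python
import numpy as np
from scipy import ndimage

def augment_observation(rgb):                  # rgb: (H, W, 3) float array in [0, 1]
    # Crude attention map from classical image processing: Sobel edge magnitude.
    gray = rgb.mean(axis=-1)
    gx = ndimage.sobel(gray, axis=0)
    gy = ndimage.sobel(gray, axis=1)
    attention = np.hypot(gx, gy)
    attention /= attention.max() + 1e-8        # normalize to [0, 1]
    # Stack the attention map as an extra channel of the observation.
    return np.concatenate([rgb, attention[..., None]], axis=-1)   # (H, W, 4)

obs = np.random.rand(64, 64, 3)
augmented = augment_observation(obs)           # extra channel guides the policy's attention
```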
An Autonomous Probing System for Collecting Measurements at Depth from Small Surface Vehicles
Yuying Huang
Yiming Yao
Johanna Hansen
Jeremy Mallette
Sandeep Manjanna
A Deep Reinforcement Learning Approach to Marginalized Importance Sampling with the Successor Representation
Scott Fujimoto
Marginalized importance sampling (MIS), which measures the density ratio between the state-action occupancy of a target policy and that of a sampling distribution, is a promising approach for off-policy evaluation. However, current state-of-the-art MIS methods rely on complex optimization tricks and succeed mostly on simple toy problems. We bridge the gap between MIS and deep reinforcement learning by observing that the density ratio can be computed from the successor representation of the target policy. The successor representation can be trained through deep reinforcement learning methodology and decouples the reward optimization from the dynamics of the environment, making the resulting algorithm stable and applicable to high-dimensional domains. We evaluate the empirical performance of our approach on a variety of challenging Atari and MuJoCo environments.
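A tabular, state-occupancy sketch of the key observation (a deliberate simplification of the paper's deep RL, state-action setting): with one-hot features the successor representation has a closed form, the target policy's discounted occupancy follows from it, and the marginalized importance weight is the ratio to the sampling distribution.

```python
import numpy as np

gamma = 0.9
n = 4
rng = np.random.default_rng(0)

# Target policy's induced state transition matrix, initial distribution, and
# an arbitrary sampling (behaviour) distribution.
P_pi = rng.random((n, n)); P_pi /= P_pi.sum(axis=1, keepdims=True)
d0 = np.full(n, 1.0 / n)
d_D = rng.random(n); d_D /= d_D.sum()

Psi = np.linalg.inv(np.eye(n) - gamma * P_pi)     # successor representation (one-hot features)
d_pi = (1 - gamma) * d0 @ Psi                     # discounted occupancy of the target policy
w = d_pi / d_D                                    # marginalized importance weights

print("occupancy sums to 1:", d_pi.sum())         # ~1.0
print("weights:", np.round(w, 3))
```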