Derek Nowrouzezahrai

Core Academic Member
Canada CIFAR AI Chair
Full Professor, McGill University, Department of Electrical and Computer Engineering
Research Topics
Computational Photography
Computer Vision
Deep Learning
Dynamical Systems
Generative Models
Reinforcement Learning
Representation Learning

Biography

Derek Nowrouzezahrai is a full professor at McGill University, where he directs the Centre for Intelligent Machines and co-directs the Graphics Lab.

He is also a Canada CIFAR AI Chair and holds the Ubisoft–Mila Research Chair on Scaling Game Worlds with Responsible AI.

Nowrouzezahrai’s research tackles the simulation of various physical phenomena, such as the dynamics of moving objects and light transport for realistic image synthesis, with applications in virtual reality, video games, fluid simulation and control, digital manufacturing, computationally augmented optics, and geometry processing. He is also interested in the development of differentiable simulators of these dynamical systems and their applications to inverse problems in robotics and vision.

This work relies fundamentally on developing high-performance, sample-efficient (Markov chain) Monte Carlo methods; high-order statistics and computational methods for complex multi-dimensional integration problems; differentiable physics-based simulators and numerical methods for dynamical systems; and applying machine learning to 3D, visual and interactive media.
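To illustrate the kind of multi-dimensional integration problem mentioned above, here is a minimal sketch (not taken from any of the lab's code) of a plain Monte Carlo estimator over the unit hypercube; the function and parameter names are illustrative only:

```python
import random

def mc_integrate(f, dim, n_samples=100_000, seed=0):
    """Estimate the integral of f over the unit hypercube [0, 1]^dim
    by averaging f at uniformly drawn sample points. Since the volume
    of [0, 1]^dim is 1, the sample mean is the integral estimate."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = [rng.random() for _ in range(dim)]
        total += f(x)
    return total / n_samples

# Example: integrate f(x, y, z) = x + y + z over [0, 1]^3.
# The exact value is 1.5; the estimate lands close to it.
estimate = mc_integrate(lambda x: sum(x), dim=3)
```

Research methods in this area go well beyond this sketch (importance sampling, Markov chain variants, variance reduction), but the estimator above is the common starting point.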


Publications

Overcoming challenges in leveraging GANs for few-shot data augmentation
Christopher Beckham
Issam Hadj Laradji
Pau Rodriguez
David Vazquez
Robust motion in-betweening
Félix Harvey
Mike Yurick
In this work we present a novel, robust transition generation technique that can serve as a new tool for 3D animators, based on adversarial recurrent neural networks. The system synthesises high-quality motions that use temporally-sparse keyframes as animation constraints. This is reminiscent of the job of in-betweening in traditional animation pipelines, in which an animator draws motion frames between provided keyframes. We first show that a state-of-the-art motion prediction model cannot be easily converted into a robust transition generator when only adding conditioning information about future keyframes. To solve this problem, we then propose two novel additive embedding modifiers that are applied at each timestep to latent representations encoded inside the network's architecture. One modifier is a time-to-arrival embedding that allows variations of the transition length with a single model. The other is a scheduled target noise vector that allows the system to be robust to target distortions and to sample different transitions given fixed keyframes. To qualitatively evaluate our method, we present a custom MotionBuilder plugin that uses our trained model to perform in-betweening in production scenarios. To quantitatively evaluate performance on transitions and generalizations to longer time horizons, we present well-defined in-betweening benchmarks on a subset of the widely used Human3.6M dataset and on LaFAN1, a novel high-quality motion capture dataset that is more appropriate for transition generation. We are releasing this new dataset along with this work, with accompanying code for reproducing our baseline results.
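The additive time-to-arrival modifier described in the abstract can be sketched roughly as follows. This is a hedged illustration, not the paper's actual implementation: the function names are hypothetical, and the sinusoidal form is borrowed from standard transformer-style positional encodings as one plausible realization:

```python
import math

def tta_embedding(tta, dim):
    """Sinusoidal embedding of the time-to-arrival (number of frames
    remaining until the target keyframe). Illustrative only; the
    paper's exact embedding may differ."""
    emb = []
    for i in range(0, dim, 2):
        freq = 1.0 / (10000 ** (i / dim))
        emb.append(math.sin(tta * freq))
        emb.append(math.cos(tta * freq))
    return emb[:dim]

def modify_latent(latent, tta):
    """Additively apply the embedding to one timestep's latent vector,
    so a single model can handle transitions of varying length."""
    emb = tta_embedding(tta, len(latent))
    return [h + e for h, e in zip(latent, emb)]
```

Because the modifier is purely additive and indexed by frames-to-go rather than absolute time, the same trained network can generate transitions of different lengths.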
Pix2Shape: Towards Unsupervised Learning of 3D Scenes from Images Using a View-Based Representation
Sai Rajeswar
Fahim Mannan
Jérôme Parent-Lévesque
David Vazquez
Adversarial Soft Advantage Fitting: Imitation Learning without Policy Optimization
Adversarial imitation learning alternates between learning a discriminator -- which tells apart expert demonstrations from generated ones -- and a generator's policy to produce trajectories that can fool this discriminator. This alternated optimization is known to be delicate in practice, since it compounds unstable adversarial training with brittle and sample-inefficient reinforcement learning. We propose to remove the burden of the policy optimization steps by leveraging a novel discriminator formulation. Specifically, our discriminator is explicitly conditioned on two policies: the one from the previous generator's iteration and a learnable policy. When optimized, this discriminator directly learns the optimal generator's policy. Consequently, our discriminator's update solves the generator's optimization problem for free: learning a policy that imitates the expert does not require an additional optimization loop. This formulation effectively cuts in half the implementation and computational burden of adversarial imitation learning algorithms by removing the reinforcement learning phase altogether. We show on a variety of tasks that our simpler approach is competitive with prevalent imitation learning methods.
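The two-policy discriminator described above can be sketched in a few lines. This is an illustrative reading of the abstract, not the paper's code; `pi_new` and `pi_old` are hypothetical callables returning the action likelihood pi(action | state):

```python
import math

def asaf_discriminator(pi_new, pi_old, state, action):
    """Discriminator structured as a ratio of two policies' action
    likelihoods: the probability that (state, action) came from the
    learnable policy rather than the previous generator's policy."""
    p = pi_new(state, action)
    q = pi_old(state, action)
    return p / (p + q)

def discriminator_loss(pi_new, pi_old, expert_batch, generated_batch):
    """Standard binary cross-entropy: expert transitions get label 1,
    generated transitions label 0. Because the discriminator is built
    from pi_new itself, minimizing this loss trains the imitation
    policy directly -- no separate RL policy-optimization loop."""
    loss = 0.0
    for s, a in expert_batch:
        loss -= math.log(asaf_discriminator(pi_new, pi_old, s, a))
    for s, a in generated_batch:
        loss -= math.log(1.0 - asaf_discriminator(pi_new, pi_old, s, a))
    return loss / (len(expert_batch) + len(generated_batch))
```

The key design point is that the classifier's parameters are the policy's parameters, so the discriminator update and the generator update collapse into one step.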