Home

Inspiring the development of artificial intelligence for the benefit of all 

A professor talks to his students in a café/lounge.

Located in the heart of Quebec’s AI ecosystem, Mila is a community of more than 1,200 researchers specializing in machine learning and dedicated to scientific excellence and innovation.

About


Faculty 

Founded in 1993 by Professor Yoshua Bengio, Mila today brings together over 140 professors affiliated with Université de Montréal, McGill University, Polytechnique Montréal and HEC Montréal. Mila also welcomes professors from Université Laval, Université de Sherbrooke, École de technologie supérieure (ÉTS) and Concordia University. 

Browse the online directory

Photo of Yoshua Bengio

Latest Publications

Evaluating machine learning-driven intrusion detection systems in IoT: Performance and energy consumption
Saeid Jamshidi
Kawser Wazed Nafi
Amin Nikanjam
STAMP: Differentiable Task and Motion Planning via Stein Variational Gradient Descent
Yewon Lee
Andrew Zou Li
Yizhou Huang
Philip Huang
Eric Heiden
Krishna Murthy
Fabian Damken
Kevin A. Smith
Fabio Ramos
Florian Shkurti
Carnegie-mellon University
M. I. O. Technology
Technische Universitat Darmstadt
Nvidia
M. University
University of Sydney
Planning for many manipulation tasks, such as using tools or assembling parts, often requires both symbolic and geometric reasoning. Task and Motion Planning (TAMP) algorithms typically solve these problems by conducting a tree search over high-level task sequences while checking for kinematic and dynamic feasibility. While performant, most existing algorithms are highly inefficient as their time complexity grows exponentially with the number of possible actions and objects. Additionally, they only find a single solution to problems in which many feasible plans may exist. To address these limitations, we propose a novel algorithm called Stein Task and Motion Planning (STAMP) that leverages parallelization and differentiable simulation to efficiently search for multiple diverse plans. STAMP relaxes discrete-and-continuous TAMP problems into continuous optimization problems that can be solved using variational inference. Our algorithm builds upon Stein Variational Gradient Descent, a gradient-based variational inference algorithm, and parallelized differentiable physics simulators on the GPU to efficiently obtain gradients for inference. Further, we employ imitation learning to introduce action abstractions that reduce the inference problem to lower dimensions. We demonstrate our method on two TAMP problems and empirically show that STAMP is able to: 1) produce multiple diverse plans in parallel; and 2) search for plans more efficiently compared to existing TAMP baselines.
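For readers unfamiliar with the variational-inference step the abstract mentions, the sketch below shows a generic Stein Variational Gradient Descent (SVGD) update over a set of particles. It is a minimal illustration only, not the STAMP implementation: the plan representation, the log-density score, and the toy Gaussian target are hypothetical placeholders, whereas the actual method obtains gradients from parallelized differentiable physics simulation on the GPU.

```python
# Minimal sketch of one SVGD update: particles are attracted toward high
# probability regions while a kernel repulsion term keeps them diverse.
import numpy as np

def rbf_kernel(X, h=1.0):
    # Pairwise RBF kernel values and gradients for a set of particles X (n, d).
    diffs = X[:, None, :] - X[None, :, :]            # diffs[i, j] = x_i - x_j
    sq_dists = np.sum(diffs ** 2, axis=-1)
    K = np.exp(-sq_dists / (2 * h ** 2))
    # grad_K[i, j] = d/dx_j k(x_j, x_i) = (x_i - x_j) / h^2 * k(x_i, x_j)
    grad_K = (diffs / h ** 2) * K[:, :, None]
    return K, grad_K

def svgd_step(X, grad_log_p, step_size=1e-2, h=1.0):
    # X: (n, d) candidate solutions; grad_log_p: (n, d) score of each particle.
    n = X.shape[0]
    K, grad_K = rbf_kernel(X, h)
    phi = (K @ grad_log_p + grad_K.sum(axis=1)) / n  # attraction + repulsion
    return X + step_size * phi

# Toy usage: particles drift toward a standard Gaussian while staying spread out.
rng = np.random.default_rng(0)
particles = rng.normal(size=(32, 4))
for _ in range(200):
    score = -particles                               # grad log N(0, I)
    particles = svgd_step(particles, score)
```

Because each particle is updated in parallel, the same loop naturally yields multiple diverse candidate plans rather than a single solution.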
Ctrl-V: Higher Fidelity Autonomous Vehicle Video Generation with Bounding-Box Controlled Object Motion
Ge Ya Luo
Zhi Hao Luo
Anthony Gosselin
Alexia Jolicoeur-Martineau
Efficient Morphology-Aware Policy Transfer to New Embodiments
Michael Przystupa
Hongyao Tang
Mariano Phielipp
Santiago Miret
Martin Jägersand
Matthew E. Taylor
Morphology-aware policy learning is a means of enhancing policy sample efficiency by aggregating data from multiple agents. These types of p… (see more)olicies have previously been shown to help generalize over dynamic, kinematic, and limb configuration variations between agent morphologies. Unfortunately, these policies still have sub-optimal zero-shot performance compared to end-to-end finetuning on morphologies at deployment. This limitation has ramifications in practical applications such as robotics because further data collection to perform end-to-end finetuning can be computationally expensive. In this work, we investigate combining morphology-aware pretraining with \textit{parameter efficient finetuning} (PEFT) techniques to help reduce the learnable parameters necessary to specialize a morphology-aware policy to a target embodiment. We compare directly tuning sub-sets of model weights, input learnable adapters, and prefix tuning techniques for online finetuning. Our analysis reveals that PEFT techniques in conjunction with policy pre-training generally help reduce the number of samples to necessary to improve a policy compared to training models end-to-end from scratch. We further find that tuning as few as less than 1\% of total parameters will improve policy performance compared the zero-shot performance of the base pretrained a policy.
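To make the parameter-efficient finetuning idea concrete, the sketch below freezes a pretrained policy and trains only a small input adapter for a new embodiment, in the spirit of the learnable-adapter variant compared in the abstract. It is a minimal illustration under assumed names and sizes: PolicyBackbone, AdapterPolicy, and the observation/action dimensions are hypothetical, not the authors' architecture.

```python
# Minimal PEFT sketch: freeze the pretrained backbone, train only an adapter.
import torch
import torch.nn as nn

class PolicyBackbone(nn.Module):
    # Stand-in for a pretrained morphology-aware policy network.
    def __init__(self, obs_dim=64, act_dim=8, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs):
        return self.net(obs)

class AdapterPolicy(nn.Module):
    # Maps the new embodiment's observations into the frozen backbone's input space.
    def __init__(self, backbone, new_obs_dim=48, backbone_obs_dim=64):
        super().__init__()
        self.adapter = nn.Linear(new_obs_dim, backbone_obs_dim)
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False                  # keep pretrained weights fixed

    def forward(self, obs):
        return self.backbone(self.adapter(obs))

backbone = PolicyBackbone()                          # assume this was pretrained
policy = AdapterPolicy(backbone)

trainable = sum(p.numel() for p in policy.parameters() if p.requires_grad)
total = sum(p.numel() for p in policy.parameters())
print(f"tuning {trainable / total:.1%} of parameters")  # only the adapter updates

optimizer = torch.optim.Adam(
    (p for p in policy.parameters() if p.requires_grad), lr=3e-4
)
```

Only the adapter's parameters reach the optimizer, which is what keeps the fraction of tuned weights small relative to end-to-end finetuning.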

AI for Humanity

Socially responsible and beneficial development of AI is a fundamental component of Mila’s mission. As a leader in the field, we wish to contribute to social dialogue and the development of applications that will benefit society.

Learn more

A person looks up at a starry sky.