
Yoshua Bengio

Core Academic Member
Canada CIFAR AI Chair
Full Professor, Université de Montréal, Department of Computer Science and Operations Research
Founder and Scientific Advisor, Leadership Team
Research Topics
Causality
Computational Neuroscience
Deep Learning
Generative Models
Graph Neural Networks
Machine Learning Theory
Medical Machine Learning
Molecular Modeling
Natural Language Processing
Probabilistic Models
Reasoning
Recurrent Neural Networks
Reinforcement Learning
Representation Learning

Biography

For media requests, please write to medias@mila.quebec.

For more information, please contact Marie-Josée Beauchamp, Administrative Assistant, at marie-josee.beauchamp@mila.quebec.

Yoshua Bengio is recognized worldwide as a leading expert in AI. He is best known for his pioneering work in deep learning, which earned him the 2018 A.M. Turing Award, often called "the Nobel Prize of computing," shared with Geoffrey Hinton and Yann LeCun.

Bengio is a full professor at Université de Montréal, and the founder and scientific advisor of Mila – Quebec Artificial Intelligence Institute. He is also a senior fellow at CIFAR and co-directs its Learning in Machines & Brains program, serves as special advisor and founding scientific director of IVADO, and holds a Canada CIFAR AI Chair.

In 2019, Bengio was awarded the prestigious Killam Prize, and in 2022 he was the most cited computer scientist in the world as measured by h-index. He is a Fellow of the Royal Society of London, a Fellow of the Royal Society of Canada, a Knight of the Legion of Honor of France, and an Officer of the Order of Canada. In 2023, he was appointed to the UN's Scientific Advisory Board for Independent Advice on Breakthroughs in Science and Technology.

Concerned about the social impact of AI, Bengio helped draft the Montréal Declaration for the Responsible Development of Artificial Intelligence and continues to raise awareness about the importance of mitigating the potentially catastrophic risks associated with future AI systems.

Current Students

[Student names were not captured in this extraction. The list includes PhD students, master's students, postdoctoral researchers, research interns, independent visiting researchers, and collaborating researchers and alumni, supervised or co-supervised at Université de Montréal, McGill University, the University of Waterloo, KAIST, Cambridge University, the Technical University of Munich, and the Max Planck Institute for Intelligent Systems, among others.]

Publications

Training End-to-End Analog Neural Networks with Equilibrium Propagation
Jack D. Kendall
Ross D. Pantone
Kalpana Manickavasagam
Benjamin Scellier
We introduce a principled method to train end-to-end analog neural networks by stochastic gradient descent. In these analog neural networks, the weights to be adjusted are implemented by the conductances of programmable resistive devices such as memristors [Chua, 1971], and the nonlinear transfer functions (or 'activation functions') are implemented by nonlinear components such as diodes. We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models: they possess an energy function as a consequence of Kirchhoff's laws governing electrical circuits. This property enables us to train them using the Equilibrium Propagation framework [Scellier and Bengio, 2017]. Our update rule for each conductance, which is local and relies solely on the voltage drop across the corresponding resistor, is shown to compute the gradient of the loss function. Our numerical simulations, which use the SPICE-based Spectre simulation framework to simulate the dynamics of electrical circuits, demonstrate training on the MNIST classification task, performing comparably to or better than equivalent-size software-based neural networks. Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
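The two-phase training procedure this paper builds on can be summarized in a few lines. The sketch below is a minimal software illustration of the Equilibrium Propagation update (free phase, nudged phase, contrastive weight update) on a toy energy function; the layer sizes, the sigmoid nonlinearity, the relaxation schedule and the quadratic cost are assumptions for illustration, and nothing here models the analog resistive circuits that are the paper's actual contribution.

    import torch

    # Minimal Equilibrium Propagation sketch on a toy Hopfield-style energy.
    # Sizes, nonlinearity and hyperparameters are illustrative assumptions.
    torch.manual_seed(0)
    n_in, n_hid, n_out = 4, 8, 2
    W1 = torch.randn(n_in, n_hid) * 0.1
    W2 = torch.randn(n_hid, n_out) * 0.1

    def energy(x, h, y):
        rho = torch.sigmoid
        return (0.5 * (h ** 2).sum() + 0.5 * (y ** 2).sum()
                - (rho(x) @ W1 * rho(h)).sum()
                - (rho(h) @ W2 * rho(y)).sum())

    def relax(x, h, y, target, beta, steps=60, lr=0.1):
        # Let the state settle by gradient descent on E + beta * cost.
        h = h.clone().requires_grad_(True)
        y = y.clone().requires_grad_(True)
        for _ in range(steps):
            F = energy(x, h, y) + beta * 0.5 * ((y - target) ** 2).sum()
            gh, gy = torch.autograd.grad(F, (h, y))
            with torch.no_grad():
                h -= lr * gh
                y -= lr * gy
        return h.detach(), y.detach()

    def ep_step(x, target, beta=0.5, eta=0.05):
        # Free phase (beta = 0), then nudged phase (beta > 0).
        h_free, y_free = relax(x, torch.zeros(n_hid), torch.zeros(n_out), target, beta=0.0)
        h_nudge, y_nudge = relax(x, h_free, y_free, target, beta=beta)
        # Contrastive update: difference of energy gradients at the two fixed points.
        W1.requires_grad_(True); W2.requires_grad_(True)
        g_free = torch.autograd.grad(energy(x, h_free, y_free), (W1, W2))
        g_nudge = torch.autograd.grad(energy(x, h_nudge, y_nudge), (W1, W2))
        with torch.no_grad():
            for W, gf, gn in zip((W1, W2), g_free, g_nudge):
                W -= eta * (gn - gf) / beta
        W1.requires_grad_(False); W2.requires_grad_(False)

    ep_step(torch.randn(n_in), torch.tensor([1.0, 0.0]))

The update for each weight is the difference of the energy gradient between the nudged and free equilibria, scaled by 1/beta; in the analog setting described in the abstract, the corresponding quantity for each conductance depends only on the voltage drop across that resistive device, which is what makes the rule local.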
Multi-Image Super-Resolution for Remote Sensing using Deep Recurrent Networks
Md Rifat Arefin
Vincent Michalski
Pierre-Luc St-Charles
Alfredo Kalaitzis
Sookyung Kim
High-resolution satellite imagery is critical for various earth observation applications related to environmental monitoring, geoscience, forecasting, and land use analysis. However, the acquisition cost of such high-quality imagery, due to the scarcity of providers and the need for high-frequency revisits, restricts its accessibility in many fields. In this work, we present a data-driven, multi-image super-resolution approach to alleviate these problems. Our approach is based on an end-to-end deep neural network that consists of an encoder, a fusion module, and a decoder. The encoder extracts co-registered, highly efficient feature representations from low-resolution images of a scene. A Gated Recurrent Unit (GRU)-based module acts as the fusion module, aggregating features into a combined representation. Finally, a decoder reconstructs the super-resolved image. The proposed model is evaluated on the PROBA-V dataset released in a recent competition held by the European Space Agency. Our results show that it performs among the top contenders and offers a new practical solution for real-world applications.
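As a rough illustration of the encoder / fusion / decoder layout described above, the sketch below encodes each low-resolution frame with a shared convolutional encoder, fuses the per-pixel feature sequences with a GRU, and decodes the fused representation with a PixelShuffle upsampler. The channel counts, kernel sizes, the x3 upscaling factor, and the use of a plain GRU rather than a convolutional recurrent fusion module are assumptions made to keep the example short, not the configuration used in the paper.

    import torch
    import torch.nn as nn

    class MultiImageSR(nn.Module):
        def __init__(self, hidden=64, scale=3):
            super().__init__()
            # Shared encoder applied to each co-registered low-resolution frame.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, hidden, 3, padding=1), nn.ReLU(),
                nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            )
            # Recurrent fusion over the temporal axis (a plain GRU over per-pixel
            # feature sequences; a convolutional GRU would be closer to the paper).
            self.fusion = nn.GRU(input_size=hidden, hidden_size=hidden, batch_first=True)
            # Decoder upsamples the fused representation to the super-resolved image.
            self.decoder = nn.Sequential(
                nn.Conv2d(hidden, hidden * scale ** 2, 3, padding=1),
                nn.PixelShuffle(scale),
                nn.Conv2d(hidden, 1, 3, padding=1),
            )

        def forward(self, frames):                                    # (B, T, 1, H, W)
            B, T, C, H, W = frames.shape
            feats = self.encoder(frames.view(B * T, C, H, W))         # (B*T, hid, H, W)
            feats = feats.view(B, T, -1, H, W).permute(0, 3, 4, 1, 2)
            seq = feats.reshape(B * H * W, T, -1)                     # per-pixel sequences
            _, last = self.fusion(seq)                                # (1, B*H*W, hid)
            fused = last.squeeze(0).reshape(B, H, W, -1).permute(0, 3, 1, 2)
            return self.decoder(fused)                                # (B, 1, scale*H, scale*W)

    model = MultiImageSR()
    lr_frames = torch.randn(2, 9, 1, 32, 32)    # nine 32x32 views of each scene
    print(model(lr_frames).shape)               # torch.Size([2, 1, 96, 96])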
An Analysis of the Adaptation Speed of Causal Models
Rémi Le Priol
Reza Babanezhad Harikandeh
We consider the problem of discovering the causal process that generated a collection of datasets. We assume that all these datasets were generated by unknown sparse interventions on a structural causal model (SCM) …
COVI White Paper
Hannah Alsdurf
Tristan Deleu
Prateek Gupta
Daphne Ippolito
Richard Janda
Max Jarvie
Tyler J. Kolody
Sekoul Krastev
Robert Obryk
Dan Pilat
Valerie Pisano
Benjamin Prud'homme
Meng Qu
Nasim Rahaman
Jean-François Rousseau
Abhinav Sharma
Brooke Struck (and 3 more authors)
Martin Weiss
Yun William Yu
The SARS-CoV-2 (Covid-19) pandemic has caused significant strain on public health institutions around the world. Contact tracing is an essential tool to change the course of the Covid-19 pandemic. Manual contact tracing of Covid-19 cases has significant challenges that limit the ability of public health authorities to minimize community infections. Personalized peer-to-peer contact tracing through the use of mobile apps has the potential to shift the paradigm. Some countries have deployed centralized tracking systems, but more privacy-protecting decentralized systems offer much of the same benefit without concentrating data in the hands of a state authority or for-profit corporations. Machine learning methods can circumvent some of the limitations of standard digital tracing by incorporating many clues and their uncertainty into a more graded and precise estimation of infection risk. The estimated risk can provide early risk awareness, personalized recommendations and relevant information to the user. Finally, non-identifying risk data can inform epidemiological models trained jointly with the machine learning predictor. These models can provide statistical evidence for the importance of factors involved in disease transmission. They can also be used to monitor, evaluate and optimize health policy and (de)confinement scenarios according to medical and economic productivity indicators. However, such a strategy based on mobile apps and machine learning should proactively mitigate potential ethical and privacy risks, which could have substantial impacts on society (not only impacts on health but also impacts such as stigmatization and abuse of personal data). Here, we present an overview of the rationale, design, ethical considerations and privacy strategy of 'COVI,' a Covid-19 public peer-to-peer contact tracing and risk awareness mobile application developed in Canada.
Multi-Task Self-Supervised Learning for Robust Speech Recognition
Jianyuan Zhong
Santiago Pascual
Pawel Swietojanski
Joao Monteiro
Jan Trmal
Despite the growing interest in unsupervised learning, extracting meaningful knowledge from unlabelled audio remains an open challenge. To take a step in this direction, we recently proposed a problem-agnostic speech encoder (PASE) that combines a convolutional encoder followed by multiple neural networks, called workers, tasked to solve self-supervised problems (i.e., ones that do not require manual annotations as ground truth). PASE was shown to capture relevant speech information, including speaker voice-print and phonemes. This paper proposes PASE+, an improved version of PASE for robust speech recognition in noisy and reverberant environments. To this end, we employ an online speech distortion module that contaminates the input signals with a variety of random disturbances. We then propose a revised encoder that better learns short- and long-term speech dynamics with an efficient combination of recurrent and convolutional networks. Finally, we refine the set of workers used in self-supervision to encourage better cooperation. Results on TIMIT, DIRHA and CHiME-5 show that PASE+ significantly outperforms both the previous version of PASE as well as common acoustic features. Interestingly, PASE+ learns transferable representations suitable for highly mismatched acoustic conditions.
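To make the worker-based setup concrete, here is a heavily simplified sketch of the pattern the abstract describes: distort the input, encode it with a convolutional front end followed by a recurrent layer, and train small "worker" heads on self-supervised targets computed from the clean signal. The specific distortion (additive noise only), the two toy targets (log energy and zero-crossing rate) and all sizes are assumptions made for illustration; the actual PASE+ uses a richer distortion module and many more workers.

    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        # Convolutional front end followed by a recurrent layer, loosely
        # mirroring the "recurrent + convolutional" encoder described above.
        def __init__(self, dim=64):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(1, dim, kernel_size=10, stride=5), nn.ReLU(),
                nn.Conv1d(dim, dim, kernel_size=8, stride=4), nn.ReLU(),
            )
            self.rnn = nn.GRU(dim, dim, batch_first=True)

        def forward(self, wav):                      # wav: (B, 1, T)
            z = self.conv(wav).transpose(1, 2)       # (B, T', dim)
            z, _ = self.rnn(z)
            return z

    def distort(wav):
        # Online distortion module (simplified here to additive noise only).
        return wav + 0.05 * torch.randn_like(wav)

    encoder = Encoder()
    workers = nn.ModuleDict({                        # self-supervised worker heads
        "energy": nn.Linear(64, 1),                  # per-utterance log energy
        "zcr":    nn.Linear(64, 1),                  # zero-crossing rate
    })

    def training_step(clean_wav):
        # Encode the distorted signal, pool, and regress clean-signal statistics.
        feats = encoder(distort(clean_wav)).mean(dim=1)                 # (B, dim)
        targets = {
            "energy": clean_wav.pow(2).mean(dim=(1, 2)).log().unsqueeze(1),
            "zcr": (clean_wav[..., 1:] * clean_wav[..., :-1] < 0).float().mean(dim=(1, 2)).unsqueeze(1),
        }
        return sum(nn.functional.mse_loss(workers[k](feats), targets[k])
                   for k in workers)

    loss = training_step(torch.randn(4, 1, 16000))
    loss.backward()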
Continual Weight Updates and Convolutional Architectures for Equilibrium Propagation
Maxence Ernoult
Julie Grollier
Damien Querlioz
Benjamin Scellier
Equilibrium Propagation (EP) is a biologically inspired alternative algorithm to backpropagation (BP) for training neural networks. It applies to RNNs fed by a static input x that settle to a steady state, such as Hopfield networks. EP is similar to BP in that, in the second phase of training, an error signal propagates backwards through the layers of the network, but contrary to BP, the learning rule of EP is spatially local. Nonetheless, EP suffers from two major limitations. On the one hand, due to its formulation in terms of real-time dynamics, EP entails long simulation times, which limits its applicability to practical tasks. On the other hand, the biological plausibility of EP is limited by the fact that its learning rule is not local in time: the synapse update is performed after the dynamics of the second phase have converged and requires information from the first phase that is no longer physically available. Our work addresses these two issues and aims at widening the spectrum of EP from standard machine learning models to more bio-realistic neural networks. First, we propose a discrete-time formulation of EP which simplifies the equations, speeds up training and extends EP to CNNs. Our CNN model achieves the best performance ever reported on MNIST with EP. Using the same discrete-time formulation, we introduce Continual Equilibrium Propagation (C-EP): the weights of the network are adjusted continually in the second phase of training using information that is local in space and time. We show that in the limit of slow changes of synaptic strengths and small nudging, C-EP is equivalent to BPTT (Theorem 1). We numerically demonstrate Theorem 1 and C-EP training on MNIST and generalize it to the bio-realistic situation of a neural network with asymmetric connections between neurons.
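For orientation, the structural difference between the two learning rules can be written compactly; the notation below is assumed from the EP literature rather than taken from this abstract (Phi denotes the primitive function of the discrete-time dynamics, s_t the state at step t of the second phase, s_*^0 and s_*^beta the free and nudged steady states, and beta the nudging strength):

    \Delta\theta \propto \frac{1}{\beta}\left[\frac{\partial\Phi}{\partial\theta}(x, s_*^{\beta}) - \frac{\partial\Phi}{\partial\theta}(x, s_*^{0})\right] \quad \text{(standard EP: one update after both phases converge)}

    \Delta\theta_t \propto \frac{1}{\beta}\left[\frac{\partial\Phi}{\partial\theta}(x, s_{t+1}) - \frac{\partial\Phi}{\partial\theta}(x, s_{t})\right] \quad \text{(C-EP: an update at every step of the second phase)}

When the weights change negligibly within the second phase, the C-EP increments telescope back to the standard EP update, which gives the intuition behind the stated equivalence with BPTT in the limit of small nudging and slow synaptic changes.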
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
Miles Brundage
Shahar Avin
Jasmine Wang
Haydn Belfield
Gretchen Krueger
Gillian K. Hadfield
Heidy Khlaaf
Jingying Yang
H. Toner
Ruth Catherine Fong
Pang Wei Koh
Sara Hooker
Jade Leung
Andrew John Trask
Emma Bluemke
Jonathan Lebensbold
Cullen C. O'Keefe
Mark Koren
Théo Ryffel (and 39 more authors)
JB Rubinovitz
Tamay Besiroglu
Federica Carugati
Jack Clark
Peter Eckersley
Sarah de Haas
Maritza L. Johnson
Ben Laurie
Alex Ingerman
Igor Krawczuk
Amanda Askell
Rosario Cammarota
A. Lohn
Charlotte Stix
Peter Mark Henderson
Logan Graham
Carina E. A. Prunkl
Bianca Martin
Elizabeth Seger
Noa Zilberman
Seán Ó hÉigeartaigh
Frens Kroeger
Girish Sastry
R. Kagan
Adrian Weller
Brian Shek-kam Tse
Elizabeth Barnes
Allan Dafoe
Paul D. Scharre
Ariel Herbert-Voss
Martijn Rasser
Shagun Sodhani
Carrick Flynn
Thomas Krendl Gilbert
Lisa Dyer
Saif M. Khan
Markus Anderljung
Establishing an evaluation metric to quantify climate change image realism
Sharon Zhou
Alexandra Luccioni
Gautier Cosne
Michael S. Bernstein
Combating False Negatives in Adversarial Imitation Learning (Student Abstract)
Konrad Żołna
Chitwan Saharia
Léonard Boussioux
David Y. T. Hui
Maxime Chevalier-Boisvert
Pruning for efficient hardware implementations of deep neural networks
Ghouthi Boukli Hacene
Vincent Gripon
Matthieu Arzel
Nicolas Farrugia