
Guillaume Lajoie

Core Academic Member
Canada CIFAR AI Chair
Associate Professor, Université de Montréal, Department of Mathematics and Statistics
Visiting Researcher, Google
Research Topics
AI for Science
AI in Health
Cognition
Computational Neuroscience
Deep Learning
Dynamical Systems
Optimization
Reasoning
Recurrent Neural Networks
Representation Learning

Biography

Guillaume Lajoie is an Associate Professor in the Department of Mathematics and Statistics at Université de Montréal and a Core Academic Member of Mila – Quebec Artificial Intelligence Institute. He holds a Canada CIFAR AI Chair and a Canada Research Chair (CRC) in Neural Computation and Interfacing.

His research is positioned at the intersection of AI and neuroscience, where he develops tools to better understand the mechanisms of intelligence common to both biological and artificial systems. His research group's contributions range from advances in multi-scale learning paradigms for large artificial systems to applications in neurotechnology. Dr. Lajoie is actively involved in responsible AI development efforts, seeking to identify guidelines and best practices for the use of AI in research and beyond.

Current Students

Collaborating researcher - ETH Zurich
Independent visiting researcher
Principal supervisor :
PhD - Université de Montréal
Co-supervisor :
Postdoctorate - Université de Montréal
Co-supervisor :
PhD - Université de Montréal
Postdoctorate - Université de Montréal
Co-supervisor :
PhD - Université de Montréal
Principal supervisor :
PhD - Université de Montréal
Postdoctorate - McGill University
Principal supervisor :
Research Intern - McGill University
Principal supervisor :
Master's Research - Polytechnique Montréal
Principal supervisor :
PhD - Université de Montréal
Independent visiting researcher - McGill University
PhD - Université de Montréal
Co-supervisor :
Master's Research - Université de Montréal
Co-supervisor :
Research Intern - Concordia University
Co-supervisor :
PhD - Université de Montréal
Co-supervisor :
PhD - Université de Montréal
Co-supervisor :
PhD - Université de Montréal
Co-supervisor :
Collaborating researcher - Université de Montréal
Collaborating researcher
Principal supervisor :
Collaborating Alumni - McGill University
Principal supervisor :
Master's Research - Université de Montréal
PhD - Université de Montréal
Principal supervisor :
PhD - Université de Montréal
Co-supervisor :
Independent visiting researcher - Champalimaud Centre for the Unknown
Postdoctorate - Université de Montréal
PhD - Université de Montréal

Publications

Learning function from structure in neuromorphic networks
Laura E. Suárez
Bratislav Mišić
Learning Brain Dynamics With Coupled Low-Dimensional Nonlinear Oscillators and Deep Recurrent Networks
Germán Abrevaya
Aleksandr Y. Aravkin
Peng Zheng
Jean-Christophe Gagnon-Audet
James Kozloski
Pablo Polosecki
David Cox
Silvina Ponce Dawson
Guillermo Cecchi
Many natural systems, especially biological ones, exhibit complex multivariate nonlinear dynamical behaviors that can be hard to capture by linear autoregressive models. On the other hand, generic nonlinear models such as deep recurrent neural networks often require large amounts of training data, not always available in domains such as brain imaging; they also often lack interpretability. Domain knowledge about the types of dynamics typically observed in such systems, such as a certain type of dynamical systems model, could complement purely data-driven techniques by providing a good prior. In this work, we consider a class of ordinary differential equation (ODE) models known as van der Pol (VDP) oscillators and evaluate their ability to capture a low-dimensional representation of neural activity measured by different brain imaging modalities, such as calcium imaging (CaI) and fMRI, in different living organisms: larval zebrafish, rat, and human. We develop a novel and efficient approach to the nontrivial problem of parameter estimation for a network of coupled dynamical systems from multivariate data and demonstrate that the resulting VDP models are both accurate and interpretable, as the VDP coupling matrix reveals anatomically meaningful excitatory and inhibitory interactions across different brain subsystems. VDP outperforms linear autoregressive models (VAR) in terms of both data-fit accuracy and the quality of insight provided by the coupling matrices, and often tends to generalize better to unseen data when predicting future brain activity, being comparable to and sometimes better than recurrent neural networks (LSTMs). Finally, we demonstrate that our (generative) VDP model can also serve as a data-augmentation tool, leading to marked improvements in the predictive accuracy of recurrent neural networks. Thus, our work contributes to both basic and applied dimensions of neuroimaging: gaining scientific insights and improving brain-based predictive models, an area of potentially high practical importance in clinical diagnosis and neurotechnology.
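To make the modeling idea concrete, here is a minimal sketch of a network of coupled van der Pol oscillators simulated with SciPy. The coupling form, parameter values, and variable names are assumptions for illustration only, not the paper's exact formulation or estimation procedure.

```python
# Illustrative sketch: simulating a small network of coupled van der Pol
# oscillators. The coupling form and parameters are assumptions for
# demonstration, not the paper's exact model or fitting method.
import numpy as np
from scipy.integrate import solve_ivp

n = 3                                   # number of oscillators (e.g., brain regions)
mu = np.array([1.0, 1.5, 0.8])          # per-node nonlinearity parameters
W = 0.1 * np.random.default_rng(0).standard_normal((n, n))  # coupling matrix
np.fill_diagonal(W, 0.0)

def coupled_vdp(t, state):
    x, y = state[:n], state[n:]
    dx = y
    dy = mu * (1.0 - x**2) * y - x + W @ x   # coupling enters through positions x
    return np.concatenate([dx, dy])

x0 = np.concatenate([np.ones(n), np.zeros(n)])
sol = solve_ivp(coupled_vdp, (0.0, 50.0), x0, t_eval=np.linspace(0, 50, 2000))
print(sol.y.shape)  # (2n, 2000): simulated low-dimensional "neural" trajectories
```

In a fitting setting, the entries of W and mu would be treated as unknowns to be estimated from recorded activity; here they are fixed so the simulation runs as-is.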
PNS-GAN: Conditional Generation of Peripheral Nerve Signals in the Wavelet Domain via Adversarial Networks
Olivier Tessier-Lariviere
Luke Y. Prince
Pascal Fortier-Poisson
Lorenz Wernisch
Oliver Armitage
Emil Hewage
Simulated datasets of neural recordings are a crucial tool in neural engineering for testing the ability of decoding algorithms to recover known ground-truth. In this work, we introduce PNS-GAN, a generative adversarial network capable of producing realistic nerve recordings conditioned on physiological biomarkers. PNS-GAN operates in the wavelet domain to preserve both the timing and frequency of neural events with high resolution. PNS-GAN generates sequences of scaleograms from noise using a recurrent neural network and 2D transposed convolution layers. PNS-GAN discriminates over stacks of scaleograms with a network of 3D convolution layers. We find that our generated signal reproduces a number of characteristics of the real signal, including similarity in a canonical time-series feature-space, and contains physiologically related neural events including respiration modulation and similar distributions of afferent and efferent signalling.
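The following PyTorch sketch illustrates the architectural pattern described above: a recurrent generator that emits scaleogram frames through 2D transposed convolutions, and a discriminator that applies 3D convolutions over the stacked scaleograms. All layer sizes, names, and the overall structure are assumptions for demonstration, not the published PNS-GAN implementation.

```python
# Illustrative sketch (assumed sizes, not the published architecture):
# recurrent generator -> sequence of scaleograms via ConvTranspose2d,
# discriminator -> 3D convolutions over the stacked scaleograms.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=64, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(z_dim, hidden, batch_first=True)
        self.to_map = nn.Linear(hidden, hidden * 4 * 4)
        self.deconv = nn.Sequential(                    # 4x4 feature map -> 32x32 scaleogram
            nn.ConvTranspose2d(hidden, 64, 4, 2, 1),    # 4x4 -> 8x8
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1),        # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, 2, 1),         # 16x16 -> 32x32
        )

    def forward(self, z):                       # z: (batch, time, z_dim)
        h, _ = self.rnn(z)                      # (batch, time, hidden)
        b, t, d = h.shape
        maps = self.to_map(h).view(b * t, d, 4, 4)
        frames = self.deconv(maps)              # (b*t, 1, 32, 32)
        return frames.view(b, t, 32, 32)        # sequence of scaleograms

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(               # treat (time, freq, time-bins) as a volume
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, scaleograms):             # (batch, time, 32, 32)
        return self.net(scaleograms.unsqueeze(1))   # add channel dim -> (batch, 1)
```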
Embedding Signals on Knowledge Graphs with Unbalanced Diffusion Earth Mover's Distance
Alexander Tong
Dennis L. Shung
Amine Natik
Manik Kuchroo
In modern relational machine learning it is common to encounter large graphs that arise via interactions or similarities between observations in many domains. Further…
Gradient Starvation: A Learning Proclivity in Neural Networks
We identify and formalize a fundamental gradient descent phenomenon resulting in a learning proclivity in over-parameterized neural networks. Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task, despite the presence of other predictive features that fail to be discovered. This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks. Using tools from Dynamical Systems theory, we identify simple properties of learning dynamics during gradient descent that lead to this imbalance, and prove that such a situation can be expected given certain statistical structure in training data. Based on our proposed formalism, we develop guarantees for a novel regularization method aimed at decoupling feature learning dynamics, improving accuracy and robustness in cases hindered by gradient starvation. We illustrate our findings with simple and real-world out-of-distribution (OOD) generalization experiments.
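The intuition can be seen in a toy example: when one feature already drives the cross-entropy loss down on its own, the gradient signal for a second, weaker but still predictive feature nearly vanishes. The sketch below is only an illustration of that intuition, not the paper's formal setting or its proposed regularizer.

```python
# Toy illustration of the gradient-starvation intuition (not the paper's
# formal setup): a strong feature dominates gradient descent on the
# cross-entropy loss, so the weaker predictive feature's weight barely grows.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
y = rng.choice([-1.0, 1.0], size=n)
x1 = 3.0 * y + 0.1 * rng.standard_normal(n)   # strong feature
x2 = 0.5 * y + 0.5 * rng.standard_normal(n)   # weaker but predictive feature
X = np.stack([x1, x2], axis=1)

w = np.zeros(2)
lr = 0.1
for step in range(2000):
    margins = y * (X @ w)
    # gradient of mean logistic loss log(1 + exp(-y * w.x))
    grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= lr * grad

print("learned weights:", w)   # w[0] >> w[1]: the weak feature is "starved"
```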
Implicit Regularization in Deep Learning: A View from Function Space
We approach the problem of implicit regularization in deep learning from a geometrical viewpoint. We highlight a possible regularization effect induced by a dynamical alignment of the neural tangent features introduced by Jacot et al., along a small number of task-relevant directions. By extrapolating a new analysis of Rademacher complexity bounds in linear models, we propose and study a new heuristic complexity measure for neural networks which captures this phenomenon, in terms of sequences of tangent kernel classes along the learning trajectories.
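For readers unfamiliar with tangent features, the sketch below shows how they can be computed for a small network: the gradient of the scalar output with respect to all parameters, whose pairwise inner products form the empirical tangent kernel. This is a generic illustration under assumed shapes, not the paper's exact procedure or complexity measure.

```python
# Minimal sketch (assumed setup): compute tangent features grad_theta f(x)
# for a small network and form the empirical tangent kernel
# K[i, j] = <grad f(x_i), grad f(x_j)>, whose alignment evolves during training.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(5, 32), nn.Tanh(), nn.Linear(32, 1))
params = list(net.parameters())
X = torch.randn(8, 5)                                  # a small batch of inputs

def tangent_features(x):
    out = net(x.unsqueeze(0)).squeeze()                # scalar output f_theta(x)
    grads = torch.autograd.grad(out, params)           # d f / d theta
    return torch.cat([g.reshape(-1) for g in grads])

Phi = torch.stack([tangent_features(x) for x in X])    # (batch, n_params)
K = Phi @ Phi.T                                        # empirical tangent kernel
print(K.shape, torch.linalg.eigvalsh(K)[-3:])          # top eigenvalues
```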
Untangling tradeoffs between recurrence and self-attention in neural networks
Giancarlo Kerg
Bhargav Kanuparthi
Anirudh Goyal
Kyle Goyette
Attention and self-attention mechanisms, inspired by cognitive processes, are now central to state-of-the-art deep learning on sequential tasks. However, most recent progress hinges on heuristic approaches with limited understanding of attention's role in model optimization and computation, and relies on considerable memory and computational resources that scale poorly. In this work, we present a formal analysis of how self-attention affects gradient propagation in recurrent networks, and prove that it mitigates the problem of vanishing gradients when trying to capture long-term dependencies. Building on these results, we propose a relevancy screening mechanism, inspired by the cognitive process of memory consolidation, that allows for a scalable use of sparse self-attention with recurrence. While providing guarantees to avoid vanishing gradients, we use simple numerical experiments to demonstrate the tradeoffs in performance and computational resources by efficiently balancing attention and recurrence. Based on our results, we propose a concrete direction of research to improve scalability of attentive networks.
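As a rough illustration of combining recurrence with sparse attention over past states, the sketch below keeps only a small set of past hidden states and attends over them at each step. The screening rule used here (keep the k most-attended states) is an assumption for illustration, not the relevancy screening mechanism proposed in the paper.

```python
# Illustrative sketch (not the paper's algorithm): a GRU cell that, at each
# step, attends over a small retained subset of past hidden states. The rule
# for which states to retain is a placeholder for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAttentiveRNN(nn.Module):
    def __init__(self, in_dim, hid_dim, k=5):
        super().__init__()
        self.cell = nn.GRUCell(in_dim + hid_dim, hid_dim)
        self.k = k

    def forward(self, x):                    # x: (batch, time, in_dim)
        b, T, _ = x.shape
        h = x.new_zeros(b, self.cell.hidden_size)
        memory, outputs = [], []             # memory: screened past hidden states
        for t in range(T):
            if memory:
                mem = torch.stack(memory, dim=1)              # (b, m, hid)
                scores = torch.einsum("bh,bmh->bm", h, mem)   # dot-product attention
                attn = F.softmax(scores, dim=-1)
                context = torch.einsum("bm,bmh->bh", attn, mem)
                if mem.size(1) > self.k:                      # placeholder screening rule
                    top = attn.mean(0).topk(self.k).indices
                    memory = [memory[i] for i in top.tolist()]
            else:
                context = torch.zeros_like(h)
            h = self.cell(torch.cat([x[:, t], context], dim=-1), h)
            memory.append(h)
            outputs.append(h)
        return torch.stack(outputs, dim=1)   # (batch, time, hid)

out = SparseAttentiveRNN(in_dim=10, hid_dim=32)(torch.randn(4, 20, 10))
print(out.shape)
```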
Learning to Combine Top-Down and Bottom-Up Signals in Recurrent Neural Networks with Attention over Modules
Alex Lamb
Anirudh Goyal
Vikram Voleti
Murray P. Shanahan
Michael Curtis Mozer
Learning Long-term Dependencies Using Cognitive Inductive Biases in Self-attention RNNs
Giancarlo Kerg
Bhargav Kanuparthi
Anirudh Goyal
Kyle Goyette
Attention and self-attention mechanisms, inspired by cognitive processes, are now central to state-of-the-art deep learning on sequential tasks. However, most recent progress hinges on heuristic approaches that rely on considerable memory and computational resources that scale poorly. In this work, we propose a relevancy screening mechanism, inspired by the cognitive process of memory consolidation, that allows for a scalable use of sparse self-attention with recurrence. We use simple numerical experiments to demonstrate that this mechanism helps enable recurrent systems on generalization and transfer learning tasks. Based on our results, we propose a concrete direction of research to improve scalability and generalization of attentive recurrent networks.
Untangling tradeoffs between recurrence and self-attention in artificial neural networks
Giancarlo Kerg
Bhargav Kanuparthi
Anirudh Goyal
Kyle Goyette