
Guillaume Lajoie

Core Academic Member
Canada CIFAR AI Chair
Associate Professor, Université de Montréal, Department of Mathematics and Statistics
Visiting Researcher, Google
Research Topics
AI for Science
AI in Health
Cognition
Computational Neuroscience
Deep Learning
Dynamical Systems
Optimization
Reasoning
Recurrent Neural Networks
Representation Learning

Biography

Guillaume Lajoie is an Associate Professor in the Department of Mathematics and Statistics at Université de Montréal and a Core Academic Member of Mila – Quebec Artificial Intelligence Institute. He holds a Canada CIFAR AI Chair and a Canada Research Chair (CRC) in Neural Computation and Interfacing.

His research is positioned at the intersection of AI and neuroscience, where he develops tools to better understand mechanisms of intelligence common to both biological and artificial systems. His research group's contributions range from advances in multi-scale learning paradigms for large artificial systems to applications in neurotechnology. Dr. Lajoie is actively involved in responsible AI development efforts, seeking to identify guidelines and best practices for the use of AI in research and beyond.

Current Students

Collaborating researcher - ETH Zurich
Collaborating Alumni - Polytechnique Montréal
Independent visiting researcher
Principal supervisor:
PhD - Université de Montréal
Co-supervisor:
Postdoctorate - Université de Montréal
Co-supervisor:
PhD - Université de Montréal
Postdoctorate - Université de Montréal
Co-supervisor:
PhD - Université de Montréal
Principal supervisor:
PhD - Université de Montréal
Postdoctorate - McGill University
Principal supervisor:
Master's Research - Polytechnique Montréal
Principal supervisor:
PhD - Université de Montréal
Independent visiting researcher - McGill University
PhD - McGill University
Principal supervisor:
PhD - Université de Montréal
Co-supervisor:
Master's Research - Université de Montréal
Co-supervisor:
PhD - McGill University
Principal supervisor:
Research Intern - Concordia University
Co-supervisor:
PhD - Université de Montréal
Co-supervisor:
PhD - Université de Montréal
Co-supervisor:
PhD - Université de Montréal
Co-supervisor:
Collaborating researcher - Université de Montréal
Collaborating researcher
Principal supervisor:
Master's Research - Université de Montréal
Master's Research - Université de Montréal
Principal supervisor:
PhD - Université de Montréal
Principal supervisor:
PhD - Université de Montréal
Co-supervisor:
Postdoctorate - Université de Montréal
PhD - Université de Montréal
Independent visiting researcher - University of Southern California

Publications

Autonomous optimization of neuroprosthetic stimulation parameters that drive the motor cortex and spinal cord outputs in rats and monkeys
Sandrine L. Côté
Elena Massai
Parikshat Sirpal
Stephan Quessy
Marina Martinez
Numa Dancause
Neural stimulation can alleviate paralysis and sensory deficits. Novel high-density neural interfaces can enable refined and multipronged neurostimulation interventions. To achieve this, it is essential to develop algorithmic frameworks capable of handling optimization in large parameter spaces. Here, we leveraged an algorithmic class, Gaussian-process (GP)-based Bayesian optimization (BO), to solve this problem. We show that GP-BO efficiently explores the neurostimulation space, outperforming other search strategies after testing only a fraction of the possible combinations. Through a series of real-time multi-dimensional neurostimulation experiments, we demonstrate optimization across diverse biological targets (brain, spinal cord), animal models (rats, non-human primates), in healthy subjects, and in neuroprosthetic intervention after injury, for both immediate and continual learning over multiple sessions. GP-BO can embed and improve “prior” expert/clinical knowledge to dramatically enhance its performance. These results advocate for broader establishment of learning agents as structural elements of neuroprosthetic design, enabling personalization and maximization of therapeutic effectiveness.
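The optimization machinery described above is standard enough to sketch. Below is a minimal, hypothetical illustration of GP-based Bayesian optimization over a discrete grid of stimulation parameters (electrode index × amplitude), with a synthetic response function standing in for the measured motor output; the kernel, acquisition rule, and all numerical settings are assumptions made for the example, not the study's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate stimulation parameters: every (electrode, amplitude) pair on a grid.
electrodes = np.arange(32)
amplitudes = np.linspace(0.1, 3.0, 20)
X = np.array([(e, a) for e in electrodes for a in amplitudes], dtype=float)

def evoked_response(x):
    """Synthetic stand-in for the measured motor output (unknown to the optimizer)."""
    e, a = x
    return np.exp(-((e - 11.0) ** 2) / 40.0 - ((a - 1.8) ** 2) / 0.5) + 0.05 * rng.standard_normal()

def rbf_kernel(A, B, length_scales=(4.0, 0.6)):
    d = (A[:, None, :] - B[None, :, :]) / np.asarray(length_scales)
    return np.exp(-0.5 * np.sum(d ** 2, axis=-1))

def gp_posterior(X_obs, y_obs, X_query, noise=0.05):
    """Posterior mean and variance of a GP with an RBF kernel at the query points."""
    K = rbf_kernel(X_obs, X_obs) + noise ** 2 * np.eye(len(X_obs))
    Ks = rbf_kernel(X_obs, X_query)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)          # prior variance is 1 for this kernel
    return mu, np.maximum(var, 1e-12)

# Bayesian-optimization loop with an upper-confidence-bound acquisition rule.
queried = [X[rng.integers(len(X))]]
responses = [evoked_response(queried[0])]
for step in range(30):
    mu, var = gp_posterior(np.array(queried), np.array(responses), X)
    ucb = mu + 2.0 * np.sqrt(var)               # favour high predicted response and high uncertainty
    nxt = X[int(np.argmax(ucb))]
    queried.append(nxt)
    responses.append(evoked_response(nxt))

best = queried[int(np.argmax(responses))]
print("best stimulation parameters found (electrode, amplitude):", best)
```

The loop only queries a few dozen of the 640 candidate settings, which is the sense in which GP-BO explores a large stimulation space with a small experimental budget.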
Neural manifolds and learning regimes in neural-interface tasks
Neural activity tends to reside on manifolds whose dimension is lower than the dimension of the whole neural state space. Experiments using brain-computer interfaces (BCIs) with microelectrode arrays implanted in the motor cortex of nonhuman primates have provided ways to test whether neural manifolds influence learning-related neural computations. Starting from a learned BCI-controlled motor task, these experiments explored the effect of changing the BCI decoder to implement perturbations that were either “aligned” or not with the pre-existing neural manifold. In a series of studies, researchers found that within-manifold perturbations (WMPs) evoked fast reassociations of existing neural patterns for rapid adaptation, while outside-manifold perturbations (OMPs) triggered a slower adaptation process that led to the emergence of new neural patterns. Together, these findings have been interpreted as suggesting that these different rates of adaptation might be associated with distinct learning mechanisms. Here, we investigated whether gradient-descent learning could alone explain these differences. Using an idealized model that captures the fixed-point dynamics of recurrent neural networks, we uncovered gradient-based learning dynamics consistent with experimental findings. Crucially, this experimental match arose only when the network was initialized in a lazier learning regime, a concept inherited from deep learning theory. A lazy learning regime—in contrast with a rich regime—implies small changes on synaptic strengths throughout learning. For OMPs, these small changes were less effective at increasing performance and could lead to unstable adaptation with a heightened sensitivity to learning rates. For WMPs, they helped reproduce the reassociation mechanism on short adaptation time scales, especially with large input variances. Since gradient descent has many biologically plausible variants, our findings establish lazy gradient-based learning as a plausible mechanism for adaptation under network-level constraints and unify several experimental results from the literature.
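The lazy-versus-rich distinction invoked above can be made concrete with a toy experiment: scaling a network's output by a factor alpha (the standard lazy-training parameterization) controls how much the weights move during learning. The sketch below uses an assumed two-layer network and regression task, not the paper's fixed-point RNN model; all sizes and learning rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression task.
X = rng.standard_normal((200, 10))
y = np.tanh(X @ rng.standard_normal(10))

def train_two_layer(alpha, steps=2000, lr=0.02, hidden=256):
    """Fit f(x) = alpha * (a^T tanh(W x) - f0) by gradient descent.
    Large alpha gives a lazier regime: the loss still drops, but the weights
    barely move relative to their initialization."""
    W = rng.standard_normal((hidden, 10)) / np.sqrt(10)
    a = rng.standard_normal(hidden) / np.sqrt(hidden)
    W0 = W.copy()
    f0 = np.tanh(X @ W.T) @ a                     # subtract the initial output (standard lazy-training setup)
    for _ in range(steps):
        H = np.tanh(X @ W.T)                      # (n, hidden)
        err = alpha * (H @ a - f0) - y            # (n,)
        grad_a = alpha * H.T @ err / len(X)
        grad_W = alpha * ((err[:, None] * (1 - H ** 2) * a).T @ X) / len(X)
        # rescale the step by 1/alpha**2 so all regimes train at a comparable speed in function space
        a -= lr / alpha ** 2 * grad_a
        W -= lr / alpha ** 2 * grad_W
    rel_change = np.linalg.norm(W - W0) / np.linalg.norm(W0)
    loss = np.mean((alpha * (np.tanh(X @ W.T) @ a - f0) - y) ** 2)
    return rel_change, loss

for alpha in (0.25, 1.0, 10.0):
    rel, loss = train_two_layer(alpha)
    print(f"alpha={alpha:5.2f}  relative weight change={rel:.3f}  final loss={loss:.4f}")
```

The printout shows the qualitative signature the abstract relies on: in the lazy (large-alpha) regime the relative change in the weights stays small even though the task is still being learned.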
Transfer Entropy Bottleneck: Learning Sequence to Sequence Information Transfer
Pascal Fortier-Poisson
Blake Aaron Richards
When presented with a data stream of two statistically dependent variables, predicting the future of one of the variables (the target stream) can benefit from information about both its history and the history of the other variable (the source stream). For example, fluctuations in temperature at a weather station can be predicted using both temperatures and barometric readings. However, a challenge when modelling such data is that it is easy for a neural network to rely on the greatest joint correlations within the target stream, which may ignore a crucial but small information transfer from the source to the target stream. As well, there are often situations where the target stream may have previously been modelled independently and it would be useful to use that model to inform a new joint model. Here, we develop an information bottleneck approach for conditional learning on two dependent streams of data. Our method, which we call Transfer Entropy Bottleneck (TEB), allows one to learn a model that bottlenecks the directed information transferred from the source variable to the target variable, while quantifying this information transfer within the model. As such, TEB provides a useful new information bottleneck approach for modelling two statistically dependent streams of data in order to make predictions about one of them.
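As a small illustration of the two-stream setting (not of TEB itself), the snippet below builds two coupled autoregressive streams and shows that predicting the target from its own history alone misses a small but real directed information transfer from the source. The coupling strength, stream lengths, and the Granger-style log-ratio used as a transfer proxy are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Source stream s drives target stream t through a small, delayed coupling.
T = 5000
s = np.zeros(T)
t = np.zeros(T)
for k in range(1, T):
    s[k] = 0.9 * s[k - 1] + rng.standard_normal()
    t[k] = 0.8 * t[k - 1] + 0.15 * s[k - 1] + 0.5 * rng.standard_normal()

def one_step_mse(y, regressors):
    """Least-squares one-step prediction error of y from the given regressors."""
    A = np.column_stack(regressors)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.mean((y - A @ coef) ** 2)

target_future = t[1:]
mse_own = one_step_mse(target_future, [t[:-1]])            # target history only
mse_joint = one_step_mse(target_future, [t[:-1], s[:-1]])  # target + source history

# Gaussian/Granger-style proxy for the directed information transfer from source to target.
print(f"MSE (target history only):     {mse_own:.4f}")
print(f"MSE (target + source history): {mse_joint:.4f}")
print(f"transfer proxy: {0.5 * np.log(mse_own / mse_joint):.4f} nats")
```

The joint model's error is only slightly lower, which is exactly the regime the abstract describes: the transfer from source to target is small and easy for a model to ignore, motivating an approach that explicitly bottlenecks and quantifies it.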
Use of Invasive Brain-Computer Interfaces in Pediatric Neurosurgery: Technical and Ethical Considerations
David Bergeron
Christian Iorio-Morin
Nathalie Orr Gaucher
Éric Racine
Alexander G. Weil
Steerable Equivariant Representation Learning
Willie McClinton
Tongzhou Wang
Chen Sun
Phillip Isola
Dilip Krishnan
Pre-trained deep image representations are useful for post-training tasks such as classification through transfer learning, image retrieval, and object detection. Data augmentations are a crucial aspect of pre-training robust representations in both supervised and self-supervised settings. Data augmentations explicitly or implicitly promote invariance in the embedding space to the input image transformations. This invariance reduces generalization to those downstream tasks which rely on sensitivity to these particular data augmentations. In this paper, we propose a method of learning representations that are instead equivariant to data augmentations. We achieve this equivariance through the use of steerable representations. Our representations can be manipulated directly in embedding space via learned linear maps. We demonstrate that our resulting steerable and equivariant representations lead to better performance on transfer learning and robustness: e.g. we improve linear probe top-1 accuracy by between 1% and 3% for transfer; and ImageNet-C accuracy by up to 3.4%. We further show that the steerability of our representations provides significant speedup (nearly 50x) for test-time augmentations; by applying a large number of augmentations for out-of-distribution detection, we significantly improve OOD AUC on the ImageNet-C dataset over an invariant representation.
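The core mechanism, an image-space augmentation acting as a learned linear map in embedding space, can be sketched with placeholders: a fixed random encoder, a coordinate permutation standing in for a real augmentation, and a linear "steering" map fit by least squares. None of these choices reflect the paper's actual models, augmentations, or training procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "images" and a fixed random encoder standing in for a pretrained network.
n, d_in, d_emb = 512, 64, 32
X = rng.standard_normal((n, d_in))
W_enc = rng.standard_normal((d_emb, d_in)) / np.sqrt(d_in)

def encode(x):
    return np.tanh(x @ W_enc.T)

# The "augmentation": a fixed permutation of input dimensions
# (a stand-in for a crop or color transform in the real setting).
perm = rng.permutation(d_in)
def augment(x):
    return x[:, perm]

Z = encode(X)                 # embeddings of the original inputs
Z_aug = encode(augment(X))    # embeddings of the augmented inputs

# Equivariance: fit a linear map M so that encode(augment(x)) ~= encode(x) @ M.
M, *_ = np.linalg.lstsq(Z, Z_aug, rcond=None)

steered = Z @ M
residual = np.linalg.norm(steered - Z_aug) / np.linalg.norm(Z_aug)
print(f"relative error of steering in embedding space: {residual:.3f}")
```

With a fixed encoder this post-hoc fit is only approximate; in the paper the representations themselves are learned to be steerable, so applying the learned linear map tracks the augmentation far more tightly, which is what makes cheap test-time "augmentation in embedding space" possible.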
How Gradient Estimator Variance and Bias Could Impact Learning in Neural Circuits
Yuhan Helena Liu
Konrad Kording
Blake A. Richards
Reliability of CKA as a Similarity Measure in Deep Learning
Comparing learned neural representations in neural networks is a challenging but important problem, which has been approached in different ways. The Centered Kernel Alignment (CKA) similarity metric, particularly its linear variant, has recently become a popular approach and has been widely used to compare representations of a network's different layers, of architecturally similar networks trained differently, or of models with different architectures trained on the same data. A wide variety of conclusions about similarity and dissimilarity of these various representations have been made using CKA. In this work we present analysis that formally characterizes CKA sensitivity to a large class of simple transformations, which can naturally occur in the context of modern machine learning. This provides a concrete explanation of CKA sensitivity to outliers, which has been observed in past works, and to transformations that preserve the linear separability of the data, an important generalization attribute. We empirically investigate several weaknesses of the CKA similarity metric, demonstrating situations in which it gives unexpected or counter-intuitive results. Finally we study approaches for modifying representations to maintain functional behaviour while changing the CKA value. Our results illustrate that, in many cases, the CKA value can be easily manipulated without substantial changes to the functional behaviour of the models, and call for caution when leveraging activation alignment metrics.
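For reference, the linear CKA score discussed above has a short closed-form definition; the snippet below implements it and evaluates it on random matrices standing in for two layers' activations. The shapes and test cases are illustrative only.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two activation matrices
    of shape (num_examples, num_features)."""
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return hsic / (norm_x * norm_y)

rng = np.random.default_rng(4)
A = rng.standard_normal((1000, 20))                  # activations of one layer
Q = np.linalg.qr(rng.standard_normal((20, 20)))[0]   # a random orthogonal transform
B = A @ Q
C = rng.standard_normal((1000, 20))                  # unrelated activations

print(f"CKA(A, B) = {linear_cka(A, B):.3f}")   # ~1.0: CKA is invariant to orthogonal transforms
print(f"CKA(A, C) = {linear_cka(A, C):.3f}")   # close to 0 for unrelated representations
```

The paper's point is that between these two extremes the score can be pushed around by simple, functionally benign transformations, so such values should be interpreted with care.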
« Que notre cerveau soit constitué de neurones n’est pas un accident » ("That our brain is made of neurons is no accident")
Roman Ikonicoff
Formalizing locality for normative synaptic plasticity models
LEAD: Min-Max Optimization from a Physical Perspective
Adversarial formulations have rekindled interest in two-player min-max games. A central obstacle in the optimization of such games is the rotational dynamics that hinder their convergence. In this paper, we show that game optimization shares dynamic properties with particle systems subject to multiple forces, and one can leverage tools from physics to improve optimization dynamics. Inspired by the physical framework, we propose LEAD, an optimizer for min-max games. Next, using Lyapunov stability theory from dynamical systems as well as spectral analysis, we study LEAD’s convergence properties in continuous and discrete time settings for a class of quadratic min-max games to demonstrate linear convergence to the Nash equilibrium. Finally, we empirically evaluate our method on synthetic setups and CIFAR-10 image generation to demonstrate improvements in GAN training.
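The rotational dynamics mentioned above are easiest to see on a bilinear game. The sketch below contrasts plain simultaneous gradient descent-ascent, which spirals away from the equilibrium, with the extragradient method as one standard fix; this illustrates the problem LEAD targets, but it is not the LEAD update itself, and the step size and horizon are arbitrary choices.

```python
import numpy as np

# Bilinear min-max game: min_x max_y  x * y, with unique equilibrium at (0, 0).
def grads(x, y):
    return y, x                                    # dL/dx and dL/dy

def simultaneous_gda(x, y, lr=0.2, steps=300):
    """Plain gradient descent-ascent: the rotational dynamics make it spiral outward."""
    for _ in range(steps):
        gx, gy = grads(x, y)
        x, y = x - lr * gx, y + lr * gy
    return x, y

def extragradient(x, y, lr=0.2, steps=300):
    """Extragradient: a look-ahead step counteracts the rotation and converges."""
    for _ in range(steps):
        gx, gy = grads(x, y)
        xh, yh = x - lr * gx, y + lr * gy          # look-ahead point
        gx, gy = grads(xh, yh)
        x, y = x - lr * gx, y + lr * gy            # corrected step
    return x, y

print("distance to equilibrium, GDA:          ", np.hypot(*simultaneous_gda(1.0, 1.0)))
print("distance to equilibrium, extragradient:", np.hypot(*extragradient(1.0, 1.0)))
```

Plain descent-ascent ends far from the origin while the corrected method contracts toward it, which is the convergence obstacle that physics-inspired modifications of the update aim to remove.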
Multi-view manifold learning of human brain-state trajectories.
Erica L. Busch
Andrew Benz
Tom Wallenstein
Nicholas B. Turk-Browne
The complexity of the human brain gives the illusion that brain activity is intrinsically high-dimensional. Nonlinear dimensionality-reduction methods such as uniform manifold approximation and t-distributed stochastic neighbor embedding have been used for high-throughput biomedical data. However, they have not been used extensively for brain activity data such as those from functional magnetic resonance imaging (fMRI), primarily due to their inability to maintain dynamic structure. Here we introduce a nonlinear manifold learning method for time-series data—including those from fMRI—called temporal potential of heat-diffusion for affinity-based transition embedding (T-PHATE). In addition to recovering a low-dimensional intrinsic manifold geometry from time-series data, T-PHATE exploits the data’s autocorrelative structure to faithfully denoise and unveil dynamic trajectories. We empirically validate T-PHATE on three fMRI datasets, showing that it greatly improves data visualization, classification, and segmentation of the data relative to several other state-of-the-art dimensionality-reduction benchmarks. These improvements suggest many potential applications of T-PHATE to other high-dimensional datasets of temporally diffuse processes.
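As a rough, generic illustration of the diffusion-based embedding family T-PHATE belongs to (a plain diffusion-map sketch, which omits the autocorrelation term that distinguishes T-PHATE), the snippet below embeds a synthetic noisy trajectory into two dimensions; all sizes and parameters are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "brain-state" trajectory: a noisy 2-D loop embedded in 100 dimensions.
T = 400
theta = np.linspace(0, 4 * np.pi, T)
latent = np.column_stack([np.cos(theta), np.sin(theta)])
data = latent @ rng.standard_normal((2, 100)) + 0.3 * rng.standard_normal((T, 100))

# Gaussian affinity from pairwise distances, row-normalized into a Markov matrix.
sq = np.sum(data ** 2, axis=1)
d2 = sq[:, None] + sq[None, :] - 2.0 * data @ data.T
K = np.exp(-d2 / np.median(d2))
P = K / K.sum(axis=1, keepdims=True)

# Diffuse for a few steps, then embed with the leading non-trivial eigenvectors.
P_t = np.linalg.matrix_power(P, 8)
vals, vecs = np.linalg.eig(P_t)
order = np.argsort(-vals.real)
embedding = vecs[:, order[1:3]].real * vals[order[1:3]].real   # skip the constant top eigenvector

print("embedding shape:", embedding.shape)        # (T, 2): a denoised 2-D trajectory
```

T-PHATE's contribution, as summarized above, is to build the diffusion operator so that it also respects each timepoint's autocorrelative relationship to its neighbours, which this generic sketch does not do.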
Neural manifolds and gradient-based adaptation in neural-interface tasks
Neural activity tends to reside on manifolds whose dimension is much lower than the dimension of the whole neural state space. Experiments using brain-computer interfaces with microelectrode arrays implanted in the motor cortex of nonhuman primates tested the hypothesis that external perturbations should produce different adaptation strategies depending on how “aligned” the perturbation is with respect to a pre-existing intrinsic manifold. On the one hand, perturbations within the manifold (WM) evoked fast reassociations of existing patterns for rapid adaptation. On the other hand, perturbations outside the manifold (OM) triggered the slow emergence of new neural patterns underlying a much slower—and, without adequate training protocols, inconsistent or virtually impossible—adaptation. This suggests that the time scale and the overall difficulty of the brain to adapt depend fundamentally on the structure of neural activity. Here, we used a simplified static Gaussian model to show that gradient-descent learning could explain the differences between adaptation to WM and OM perturbations. For small learning rates, we found that the adaptation speeds were different but the model eventually adapted to both perturbations. Moreover, sufficiently large learning rates could entirely prohibit adaptation to OM perturbations while preserving adaptation to WM perturbations, in agreement with experiments. Adopting an incremental training protocol, as has been done in experiments, permitted a swift recovery of a full adaptation in the cases where OM perturbations were previously impossible to relearn. Finally, we also found that gradient descent was compatible with the reassociation mechanism on short adaptation time scales. Since gradient descent has many biologically plausible variants, our findings thus establish gradient-based learning as a plausible mechanism for adaptation under network-level constraints, with a central role for the learning rate.
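A toy version of this setup, with made-up dimensions rather than the paper's Gaussian model: activity is confined to a low-dimensional manifold, a 2-D decoder is perturbed either within or outside that manifold, and an upstream linear map adapts by gradient descent. The effective gradient is scaled by how much the decoder overlaps the manifold, which reproduces the fast-versus-slow adaptation qualitatively; the residual overlap of 0.2 for the outside-manifold case is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(6)

N, k = 50, 6                                         # neurons, manifold dimension
B = np.linalg.qr(rng.standard_normal((N, k)))[0]     # orthonormal basis of the neural manifold

# Within-manifold (WM) perturbation: the decoder reads a rotated pair of manifold directions.
R = np.linalg.qr(rng.standard_normal((k, k)))[0]
D_wm = (B @ R[:, :2]).T                              # (2, N)

# Outside-manifold (OM) perturbation: the decoder reads mostly off-manifold directions,
# with only a small residual overlap (0.2) with the manifold.
V = rng.standard_normal((N, 2))
V -= B @ (B.T @ V)                                   # remove the manifold component
V = np.linalg.qr(V)[0]
D_om = (0.98 * V + 0.2 * B[:, 2:4]).T                # 0.98 ~ sqrt(1 - 0.2**2), rows near unit norm

def adapt(D, lr=0.05, steps=400):
    """Gradient descent on an upstream map A: activity x = B @ A @ g, cursor y = D @ x.
    The loss (averaged over 2-D goals g) is ||D B A - I||_F^2, and its gradient is
    scaled by D B, i.e. by how strongly the decoder overlaps the manifold."""
    A = 0.01 * rng.standard_normal((k, 2))
    DB = D @ B                                       # (2, k) effective decoder on the manifold
    losses = []
    for _ in range(steps):
        E = DB @ A - np.eye(2)
        losses.append(np.sum(E ** 2))
        A -= lr * DB.T @ E                           # (half of) the loss gradient w.r.t. A
    return np.array(losses)

loss_wm, loss_om = adapt(D_wm), adapt(D_om)
print(f"final loss     WM: {loss_wm[-1]:.4f}   OM: {loss_om[-1]:.4f}")
print(f"steps to halve WM: {np.argmax(loss_wm < loss_wm[0] / 2)}   OM: {np.argmax(loss_om < loss_om[0] / 2)}")
```

With the same learning rate, the within-manifold perturbation is corrected in a handful of steps while the outside-manifold one improves far more slowly, mirroring the different adaptation speeds described above.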