Blake Richards

Core Academic Member
Canada CIFAR AI Chair
Associate Professor, McGill University, School of Computer Science and Department of Neurology and Neurosurgery

Biography

Blake Richards is an associate professor at the School of Computer Science and in the Department of Neurology and Neurosurgery at McGill University, and a core academic member of Mila – Quebec Artificial Intelligence Institute.

Richards’ research lies at the intersection of neuroscience and AI. His laboratory investigates universal principles of intelligence that apply to both natural and artificial agents.

He has received several awards for his work, including the NSERC Arthur B. McDonald Fellowship in 2022, the Canadian Association for Neuroscience Young Investigator Award in 2019, and a Canada CIFAR AI Chair in 2018. Richards was a Banting Postdoctoral Fellow at SickKids Hospital from 2011 to 2013.

He obtained his PhD in neuroscience from the University of Oxford in 2010, and his BSc in cognitive science and AI from the University of Toronto in 2004.

Publications

Beyond accuracy: generalization properties of bio-plausible temporal credit assignment rules
Yuhan Helena Liu
Arna Ghosh
Eric Todd Shea-Brown
The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning
Patrick J Mineault
Timothy P. Lillicrap
Christopher C. Pack
The visual system of mammals is composed of parallel, hierarchical, specialized pathways. Different pathways are specialized insofar as they use representations that are more suitable for supporting specific downstream behaviours. The clearest example is the specialization of the ventral (“what”) and dorsal (“where”) pathways of the visual cortex, which support behaviours related to visual recognition and movement, respectively. To date, deep neural networks have mostly been used as models of the ventral, recognition pathway, and it is unknown whether both pathways can be modelled with a single deep artificial neural network. Here, we ask whether a single model with a single loss function can capture the properties of both the ventral and the dorsal pathways. We explore this question using data from mice, which, like other mammals, have specialized pathways that appear to support recognition and movement behaviours. We show that when we train a deep neural network architecture with two parallel pathways using a self-supervised predictive loss function, we can outperform other models in fitting mouse visual cortex, and we can model both the dorsal and ventral pathways. These results demonstrate that a self-supervised predictive learning approach applied to parallel pathway architectures can account for some of the functional specialization seen in mammalian visual systems.
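A minimal sketch of the core idea, for illustration only: a single network with two parallel pathways trained end-to-end on one self-supervised predictive loss. The layer sizes, frame resolution, and the choice of a next-frame-embedding prediction objective are assumptions, not the paper's exact setup.

```python
# Minimal sketch (not the paper's code) of a two-pathway network trained
# with a single self-supervised predictive objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

def pathway():
    # One hierarchical visual pathway: a small convolutional stack.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class ParallelPathwayNet(nn.Module):
    def __init__(self, embed_dim=64):
        super().__init__()
        # Two parallel pathways over the same input, loosely analogous
        # to the ventral and dorsal streams.
        self.ventral, self.dorsal = pathway(), pathway()
        self.project = nn.Linear(32 + 32, embed_dim)
        self.predict = nn.Linear(embed_dim, embed_dim)

    def embed(self, frame):
        return self.project(torch.cat([self.ventral(frame), self.dorsal(frame)], dim=-1))

    def forward(self, frame_t, frame_next):
        # Self-supervised predictive loss: the embedding of frame t must
        # predict the embedding of frame t+1 (stop-gradient on the target).
        z_t, z_next = self.embed(frame_t), self.embed(frame_next)
        return F.mse_loss(self.predict(z_t), z_next.detach())

net = ParallelPathwayNet()
loss = net(torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64))
loss.backward()
```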
From Machine Learning to Robotics: Challenges and Opportunities for Embodied Intelligence
Nicholas Roy
Ingmar Posner
T. Barfoot
Philippe Beaudoin
Jeannette Bohg
Oliver Brock
Isabelle Depatie
Dieter Fox
D. Koditschek
Tomás Lozano-Pérez
Vikash K. Mansinghka
Dorsa Sadigh
Stefan Schaal
G. Sukhatme
Denis Therien
Marc Emile Toussaint
Michiel van de Panne
Promoting and Optimizing the Use of 3D-Printed Objects in Spontaneous Recognition Memory Tasks in Rodents: A Method for Improving Rigor and Reproducibility
Mehreen Inayat
Arely Cruz-Sanchez
Hayley H. A. Thorpe
Jude A. Frie
Jibran Y. Khokhar
Maithe Arruda-Carvalho
Forgetting Enhances Episodic Control With Structured Memories
Annik Yalnizyan-Carson
Forgetting is a normal process in healthy brains, and evidence suggests that the mammalian brain forgets more than is required based on limitations of mnemonic capacity. Episodic memories, in particular, are liable to be forgotten over time. Researchers have hypothesized that it may be beneficial for decision making to forget episodic memories over time. Reinforcement learning offers a normative framework in which to test such hypotheses. Here, we show that a reinforcement learning agent that uses an episodic memory cache to find rewards in maze environments can forget a large percentage of older memories without any performance impairment, provided it utilizes mnemonic representations that contain structural information about space. Moreover, we show that some forgetting can actually provide a benefit in performance compared to agents with unbounded memories. Our analyses of the agents show that forgetting reduces the influence of outdated information, and of states that are not frequently visited, on the policies produced by the episodic control system. These results support the hypothesis that some degree of forgetting can be beneficial for decision making, which can help to explain why the brain forgets more than is required by capacity limitations.
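A toy sketch of the setup, assuming a simple drop-oldest forgetting rule (the paper studies forgetting and memory representations more broadly): an episodic memory cache that stores the best return observed for each state and discards its oldest entries once capacity is exceeded.

```python
# Toy episodic memory cache with bounded capacity. The drop-oldest
# forgetting rule and string state keys are illustrative assumptions.
from collections import OrderedDict

class EpisodicCache:
    def __init__(self, capacity=500):
        self.capacity = capacity
        self.memory = OrderedDict()  # state key -> best return seen so far

    def write(self, key, value):
        if key in self.memory:
            self.memory[key] = max(self.memory[key], value)
            self.memory.move_to_end(key)  # refresh recency on rewrite
        else:
            self.memory[key] = value
        while len(self.memory) > self.capacity:
            self.memory.popitem(last=False)  # forget the oldest memory

    def read(self, key, default=0.0):
        return self.memory.get(key, default)

cache = EpisodicCache(capacity=4)
for state, ret in [("a", 1.0), ("b", 0.5), ("c", 2.0), ("d", 0.1), ("e", 3.0)]:
    cache.write(state, ret)
print(list(cache.memory))  # ['b', 'c', 'd', 'e'] -- "a" has been forgotten
```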
Learning function from structure in neuromorphic networks
Laura E. Suárez
Bratislav Mišić
Neocortical inhibitory interneuron subtypes are differentially attuned to synchrony- and rate-coded information
Luke Y. Prince
Matthew M. Tran
Dorian Grey
Lydia Saad
Helen Chasiotis
Jeehyun Kwag
Michael M Kohl
Parallel and Recurrent Cascade Models as a Unifying Force for Understanding Subcellular Computation
Emerson F. Harkin
Peter R. Shen
Anisha Goel
Richard Naud
Neurons are very complicated computational devices, incorporating numerous non-linear processes, particularly in their dendrites. Biophysical models capture these processes directly by explicitly modelling physiological variables, such as ion channels, current flow, membrane capacitance, etc. However, another option for capturing the complexities of real neural computation is to use cascade models, which treat individual neurons as a cascade of linear and non-linear operations, akin to a multi-layer artificial neural network. Recent research has shown that cascade models can capture single-cell computation well, but there are still a number of sub-cellular, regenerative dendritic phenomena that they cannot capture, such as the interaction between sodium, calcium, and NMDA spikes in different compartments. Here, we propose that it is possible to capture these additional phenomena using parallel, recurrent cascade models, wherein an individual neuron is modelled as a cascade of parallel linear and non-linear operations that can be connected recurrently, akin to a multi-layer, recurrent, artificial neural network. Given their tractable mathematical structure, we show that neuron models expressed in terms of parallel recurrent cascades can themselves be integrated into multi-layered artificial neural networks and trained to perform complex tasks. We go on to discuss potential implications and uses of these models for artificial intelligence. Overall, we argue that parallel, recurrent cascade models provide an important, unifying tool for capturing single-cell computation and exploring the algorithmic implications of physiological phenomena.
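A rough sketch of the modelling idea, with illustrative sizes and nonlinearities (the paper derives its subunits from specific dendritic phenomena): a single neuron expressed as several parallel linear-nonlinear subunits that interact recurrently and are read out as one scalar output.

```python
# Sketch of a single neuron as a parallel, recurrent cascade: a few
# linear-nonlinear subunits (loosely, dendritic compartments) that feed
# one another recurrently and sum to a scalar somatic output.
import torch
import torch.nn as nn

class PRCNeuron(nn.Module):
    def __init__(self, n_inputs, n_subunits=4):
        super().__init__()
        self.inp = nn.Linear(n_inputs, n_subunits)                 # input to each subunit
        self.rec = nn.Linear(n_subunits, n_subunits, bias=False)   # subunit-to-subunit recurrence
        self.out = nn.Linear(n_subunits, 1)                        # somatic readout

    def forward(self, x_seq):
        # x_seq: (time, batch, n_inputs); returns (time, batch, 1)
        h = torch.zeros(x_seq.shape[1], self.rec.in_features)
        outputs = []
        for x_t in x_seq:
            h = torch.tanh(self.inp(x_t) + self.rec(h))  # parallel LN subunits, connected recurrently
            outputs.append(self.out(h))
        return torch.stack(outputs)

neuron = PRCNeuron(n_inputs=10)
y = neuron(torch.randn(20, 8, 10))  # 20 time steps, batch of 8
```

Because such a unit is itself differentiable, it can replace the point-nonlinearity units of a standard ANN and be trained end-to-end, which is the sense in which the paper integrates these neuron models into larger networks.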
PNS-GAN: Conditional Generation of Peripheral Nerve Signals in the Wavelet Domain via Adversarial Networks
Olivier Tessier-Larivière
Luke Y. Prince
Pascal Fortier-Poisson
Lorenz Wernisch
Oliver Armitage
Emil Hewage
Simulated datasets of neural recordings are a crucial tool in neural engineering for testing the ability of decoding algorithms to recover known ground-truth. In this work, we introduce PNS-GAN, a generative adversarial network capable of producing realistic nerve recordings conditioned on physiological biomarkers. PNS-GAN operates in the wavelet domain to preserve both the timing and frequency of neural events with high resolution. PNS-GAN generates sequences of scaleograms from noise using a recurrent neural network and 2D transposed convolution layers. PNS-GAN discriminates over stacks of scaleograms with a network of 3D convolution layers. We find that our generated signal reproduces a number of characteristics of the real signal, including similarity in a canonical time-series feature-space, and contains physiologically related neural events, including respiration modulation and similar distributions of afferent and efferent signalling.
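A structural sketch of the two networks described in the abstract, with all dimensions invented for illustration (this is not the released PNS-GAN code): a GRU generator that emits one scaleogram per time step through 2D transposed convolutions, and a discriminator that scores the resulting stack with 3D convolutions.

```python
# Illustrative skeleton of the generator/discriminator pairing described
# above. Noise, conditioning, and scaleogram sizes are assumptions.
import torch
import torch.nn as nn

class ScaleogramGenerator(nn.Module):
    def __init__(self, noise_dim=32, cond_dim=4, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(noise_dim + cond_dim, hidden, batch_first=True)
        self.to_map = nn.Linear(hidden, 16 * 8 * 8)
        self.deconv = nn.Sequential(            # 8x8 feature map -> 32x32 scaleogram
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, noise, cond):
        # noise: (batch, T, noise_dim); cond: (batch, T, cond_dim) biomarkers
        h, _ = self.rnn(torch.cat([noise, cond], dim=-1))
        B, T, _ = h.shape
        maps = self.to_map(h).reshape(B * T, 16, 8, 8)
        return self.deconv(maps).reshape(B, T, 1, 32, 32)  # sequence of scaleograms

class ScaleogramDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(8, 16, kernel_size=3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, 1),
        )

    def forward(self, scaleograms):
        # (batch, T, 1, 32, 32) -> treat time as the depth axis of a 3D conv
        return self.net(scaleograms.permute(0, 2, 1, 3, 4))

G, D = ScaleogramGenerator(), ScaleogramDiscriminator()
fake = G(torch.randn(2, 10, 32), torch.randn(2, 10, 4))
score = D(fake)  # (2, 1) real/fake logits over the whole stack
```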
Parallel inference of hierarchical latent dynamics in two-photon calcium imaging of neuronal populations
Luke Y. Prince
Colleen J Gillon
Dynamic latent variable modelling has provided a powerful tool for understanding how populations of neurons compute. For spiking data, such latent variable modelling can treat the data as a set of point processes, because spiking dynamics occur on a much faster timescale than the computational dynamics being inferred. In contrast, for other experimental techniques, the slow dynamics governing the observed data are similar in timescale to the computational dynamics that researchers want to infer. An example of this is calcium imaging data, where calcium dynamics can have timescales on the order of hundreds of milliseconds. As such, the successful application of dynamic latent variable modelling to modalities like calcium imaging will rest on the ability to disentangle the deeper- and shallower-level dynamical systems’ contributions to the data. To date, no techniques have been developed to directly achieve this. Here, we solve this problem by extending recent advances in the use of sequential variational autoencoders for dynamic latent variable modelling of neural data. Our system, VaLPACa (Variational Ladders for Parallel Autoencoding of Calcium imaging data), solves the problem of disentangling deeper- and shallower-level dynamics by incorporating a ladder architecture that can infer a hierarchy of dynamical systems. Using some built-in inductive biases for calcium dynamics, we show that we can disentangle calcium flux from the underlying dynamics of neural computation. First, we demonstrate with synthetic calcium data that we can correctly disentangle an underlying Lorenz attractor from calcium dynamics. Next, we show that we can infer appropriate rotational dynamics in spiking data from macaque motor cortex after it has been converted into calcium fluorescence data via a calcium dynamics model. Finally, we show that our method applied to real calcium imaging data from primary visual cortex in mice allows us to infer latent factors that carry salient sensory information about unexpected stimuli. These results demonstrate that variational ladder autoencoders are a promising approach for inferring hierarchical dynamics in experimental settings where the measured variable has its own slow dynamics, such as calcium imaging data. Our new, open-source tool thereby provides the neuroscience community with the ability to apply dynamic latent variable modelling to a wider array of data modalities.
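To make the disentangling problem concrete, here is a toy generative model in the spirit of the abstract (not VaLPACa itself): slower latent dynamics drive spiking, and a calcium variable with its own timescale low-pass filters the spikes into the observed fluorescence. All constants below are invented for illustration.

```python
# Toy two-level generative process: latent dynamics -> spikes -> calcium
# -> fluorescence. Inference (e.g. a ladder VAE) must recover the latent
# dynamics despite the calcium dynamics layered on top of them.
import numpy as np

rng = np.random.default_rng(0)
T, n_neurons, n_latents = 500, 30, 3
dt, tau_ca = 0.01, 0.3  # seconds; calcium decay timescale

# Deeper-level dynamics: a slow rotation in latent space.
A = np.eye(n_latents) + dt * np.array([[0, -2, 0], [2, 0, -1], [0, 1, 0]])
W = rng.normal(size=(n_neurons, n_latents))   # latent-to-neuron loadings

z = np.zeros(n_latents)
calcium = np.zeros(n_neurons)
fluorescence = np.zeros((T, n_neurons))
for t in range(T):
    z = A @ z + 0.1 * rng.normal(size=n_latents)        # computational dynamics
    rates = np.exp(W @ z - 2.0)                         # latent -> firing rates
    spikes = rng.poisson(rates * dt)                    # point-process observations
    calcium += -dt / tau_ca * calcium + spikes          # shallow calcium dynamics
    fluorescence[t] = calcium + 0.05 * rng.normal(size=n_neurons)  # noisy readout
```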
Learning to live with Dale's principle: ANNs with separate excitatory and inhibitory units
Jonathan Cornford
Damjan Kalajdzievski
Marco Leite
Amélie Lamarquette
Dimitri Michael Kullmann
The units in artificial neural networks (ANNs) can be thought of as abstractions of biological neurons, and ANNs are increasingly used in neuroscience research. However, there are many important differences between ANN units and real neurons. One of the most notable is the absence of Dale's principle, which ensures that biological neurons are either exclusively excitatory or inhibitory. Dale's principle is typically left out of ANNs because its inclusion impairs learning. This is problematic, because one of the great advantages of ANNs for neuroscience research is their ability to learn complicated, realistic tasks. Here, by taking inspiration from feedforward inhibitory interneurons in the brain, we show that we can develop ANNs with separate populations of excitatory and inhibitory units that learn just as well as standard ANNs. We call these networks Dale's ANNs (DANNs). We present two insights that enable DANNs to learn well: (1) DANNs are related to normalization schemes and can be initialized such that the inhibition centres and standardizes the excitatory activity, and (2) updates to inhibitory neuron parameters should be scaled using corrections based on the Fisher information matrix. These results demonstrate how ANNs that respect Dale's principle can be built without sacrificing learning performance, which is important for future work using ANNs as models of the brain. The results may also have interesting implications for how inhibitory plasticity in the real brain operates.
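A minimal sketch of the Dale's-principle constraint itself, not the full DANN recipe (which also involves the careful initialization and Fisher-based corrections described above): a layer with separate excitatory and feedforward-inhibitory populations whose effective weights are kept non-negative, with inhibition subtracted from excitation. Population sizes and the softplus parameterization are illustrative assumptions.

```python
# Sketch of a sign-constrained layer respecting Dale's principle:
# every unit's outgoing weights share one sign (excitatory or inhibitory).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DaleLayer(nn.Module):
    def __init__(self, n_in, n_exc, n_inh):
        super().__init__()
        # Raw parameters; softplus keeps the effective weights non-negative.
        self.We = nn.Parameter(torch.randn(n_exc, n_in) * 0.1)    # input -> excitatory units
        self.Wei = nn.Parameter(torch.randn(n_inh, n_in) * 0.1)   # input -> inhibitory interneurons
        self.Wie = nn.Parameter(torch.randn(n_exc, n_inh) * 0.1)  # inhibitory -> excitatory

    def forward(self, x):
        exc = F.linear(x, F.softplus(self.We))    # excitatory drive
        inh = F.linear(x, F.softplus(self.Wei))   # feedforward interneuron activity
        # Inhibition enters with a fixed negative sign, as Dale's principle requires.
        return torch.relu(exc - F.linear(inh, F.softplus(self.Wie)))

layer = DaleLayer(n_in=100, n_exc=80, n_inh=20)
y = layer(torch.randn(16, 100))  # (16, 80) non-negative activations
```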