Blake Richards

Core Academic Member
Canada CIFAR AI Chair
Associate Professor, McGill University, School of Computer Science and Department of Neurology and Neurosurgery

Biography

Blake Richards is an associate professor at the School of Computer Science and in the Department of Neurology and Neurosurgery at McGill University, and a core academic member of Mila – Quebec Artificial Intelligence Institute.

Richards’ research lies at the intersection of neuroscience and AI. His laboratory investigates universal principles of intelligence that apply to both natural and artificial agents.

He has received several awards for his work, including the NSERC Arthur B. McDonald Fellowship in 2022, the Canadian Association for Neuroscience Young Investigator Award in 2019, and a Canada CIFAR AI Chair in 2018. Richards was a Banting Postdoctoral Fellow at SickKids Hospital from 2011 to 2013.

He obtained his PhD in neuroscience from the University of Oxford in 2010, and his BSc in cognitive science and AI from the University of Toronto in 2004.

Current Students

Richards supervises or co-supervises PhD students, master's students, postdoctoral researchers, research interns, collaborating researchers, and independent visiting researchers, most based at McGill University, with others at Université de Montréal, Georgia Tech, the University of Oregon, and the University of Oslo.

Publications

Adversarial Feature Desensitization
Reza Bayat
Adam Ibrahim
Kartik Ahuja
Mojtaba Faramarzi
Touraj Laleh
Neural networks are known to be vulnerable to adversarial attacks -- slight but carefully constructed perturbations of the inputs which can drastically impair the network's performance. Many defense methods have been proposed for improving robustness of deep networks by training them on adversarially perturbed inputs. However, these models often remain vulnerable to new types of attacks not seen during training, and even to slightly stronger versions of previously seen attacks. In this work, we propose a novel approach to adversarial robustness, which builds upon the insights from the domain adaptation field. Our method, called Adversarial Feature Desensitization (AFD), aims at learning features that are invariant to adversarial perturbations of the inputs. This is achieved through a game in which we learn features that are both predictive and robust (insensitive to adversarial attacks), i.e., features that cannot be used to discriminate between natural and adversarial data. Empirical results on several benchmarks demonstrate the effectiveness of the proposed approach against a wide range of attack types and attack strengths. Our code is available at https://github.com/BashivanLab/afd.
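The adversarial game described in the abstract can be sketched in a few lines of PyTorch. This is a toy illustration only: the encoder architecture, the FGSM attack, and the alternating update scheme below are simplifying assumptions, not the paper's exact method (see the linked repository for the authors' implementation).

# Toy sketch of Adversarial Feature Desensitization (AFD).
# Assumed details (not from the paper): FGSM attacks, MNIST-sized inputs,
# and this particular alternating update scheme.
import torch
import torch.nn as nn
import torch.nn.functional as F

features = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())  # encoder
classifier = nn.Linear(256, 10)     # task head: predicts labels from features
discriminator = nn.Linear(256, 2)   # tries to tell natural vs. adversarial features

opt_task = torch.optim.Adam([*features.parameters(), *classifier.parameters()], lr=1e-3)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def fgsm(x, y, eps=0.1):
    # One-step FGSM perturbation of the inputs (illustrative attack choice).
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(classifier(features(x)), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

def afd_step(x, y):
    x_adv = fgsm(x, y)
    z_nat, z_adv = features(x), features(x_adv)

    # 1) Discriminator learns to separate natural from adversarial features.
    d_in = torch.cat([z_nat.detach(), z_adv.detach()])
    d_tgt = torch.cat([torch.zeros(len(x)), torch.ones(len(x))]).long()
    d_loss = F.cross_entropy(discriminator(d_in), d_tgt)
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 2) Encoder and classifier: stay predictive on both natural and
    #    adversarial inputs while fooling the discriminator, so the
    #    features become insensitive to the perturbation.
    task_loss = F.cross_entropy(classifier(z_nat), y) + F.cross_entropy(classifier(z_adv), y)
    fool_loss = -F.cross_entropy(discriminator(torch.cat([z_nat, z_adv])), d_tgt)
    opt_task.zero_grad(); (task_loss + fool_loss).backward(); opt_task.step()

# Example usage with random stand-in data:
# afd_step(torch.rand(32, 1, 28, 28), torch.randint(0, 10, (32,)))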
Your head is there to move you around: Goal-driven models of the primate dorsal pathway
Patrick J Mineault
Christopher C. Pack
Neurons in the dorsal visual pathway of the mammalian brain are selective for motion stimuli, with the complexity of stimulus representations increasing along the hierarchy. This progression is similar to that of the ventral visual pathway, which is well characterized by artificial neural networks (ANNs) optimized for object recognition. In contrast, there are no image-computable models of the dorsal stream with comparable explanatory power. We hypothesized that the properties of dorsal stream neurons could be explained by a simple learning objective: the need for an organism to orient itself during self-motion. To test this hypothesis, we trained a 3D ResNet to predict an agent’s self-motion parameters from visual stimuli in a simulated environment. We found that the responses in this network accounted well for the selectivity of neurons in a large database of single-neuron recordings from the dorsal visual stream of non-human primates. In contrast, ANNs trained on an action recognition dataset through supervised or self-supervised learning could not explain responses in the dorsal stream, despite also being trained on naturalistic videos with moving objects. These results demonstrate that an ecologically relevant cost function can account for dorsal stream properties in the primate brain.
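The training objective is straightforward to illustrate. The sketch below substitutes a tiny 3D convolutional network and random tensors for the paper's 3D ResNet and simulated environment; the six self-motion parameters and all hyperparameters are illustrative assumptions.

# Toy version of the goal-driven dorsal-stream model: regress an agent's
# self-motion parameters from a short video clip.
import torch
import torch.nn as nn

class SelfMotionNet(nn.Module):
    def __init__(self, n_params=6):  # e.g., 3 translation + 3 rotation components
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, n_params)  # self-motion regression head

    def forward(self, clip):  # clip: (batch, channels, time, height, width)
        return self.head(self.backbone(clip))

model = SelfMotionNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clips = torch.rand(8, 3, 16, 64, 64)   # stand-in for simulated video clips
motion = torch.randn(8, 6)             # stand-in self-motion labels
loss = nn.functional.mse_loss(model(clips), motion)
opt.zero_grad(); loss.backward(); opt.step()
# Hidden-unit activations of a network trained this way can then be
# compared against dorsal-stream single-neuron recordings.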
Different scaling of linear models and deep learning in UKBiobank brain images versus machine-learning datasets
Marc-Andre Schulz
B.T. Thomas Yeo
Joshua T. Vogelstein
Janaina Mourao-Miranda
Jakob N. Kather
Konrad Paul Kording
Distinct roles of parvalbumin and somatostatin interneurons in gating the synchronization of spike times in the neocortex
Hyun Jae Jang
Hyowon Chung
James M. Rowland
Michael M Kohl
Jeehyun Kwag
Sensory information–driven spikes are synchronized across cortical layers by distinct subtypes of interneurons. Synchronization of precise spike times across multiple neurons carries information about sensory stimuli. Inhibitory interneurons are suggested to promote this synchronization, but it is unclear whether distinct interneuron subtypes provide different contributions. To test this, we examined single-unit recordings from barrel cortex in vivo and used optogenetics to determine the contribution of parvalbumin (PV)– and somatostatin (SST)–positive interneurons to the synchronization of spike times across cortical layers. We found that PV interneurons preferentially promote the synchronization of spike times when instantaneous firing rates are low (<12 Hz), whereas SST interneurons preferentially promote the synchronization of spike times when instantaneous firing rates are high (>12 Hz). Furthermore, using a computational model, we demonstrate that these effects can be explained by PV and SST interneurons having preferential contributions to feedforward and feedback inhibition, respectively. Our findings demonstrate that distinct subtypes of inhibitory interneurons have frequency-selective roles in the spatiotemporal synchronization of precise spike times.
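The feedforward-versus-feedback distinction can be illustrated with a toy rate model, in which a PV-like unit is driven directly by the sensory input while an SST-like unit is driven by the excitatory population's own activity. All dynamics, gains, and time constants below are illustrative assumptions, not the paper's spiking model.

# Minimal rate-model sketch: PV-like feedforward vs. SST-like feedback inhibition.
import numpy as np

dt, T = 1e-3, 1.0
steps = int(T / dt)
tau_e, tau_pv, tau_sst = 10e-3, 5e-3, 20e-3  # assumed time constants

e = pv = sst = 0.0
trace = np.zeros((steps, 3))
for t in range(steps):
    inp = 1.0 if t * dt > 0.2 else 0.0            # step of sensory drive
    # PV-like unit: driven directly by the input (feedforward inhibition).
    pv += dt / tau_pv * (-pv + 2.0 * inp)
    # SST-like unit: driven by excitatory activity (feedback inhibition).
    sst += dt / tau_sst * (-sst + 2.0 * e)
    # Excitatory unit: input minus both inhibitory currents, rectified.
    e += dt / tau_e * (-e + max(inp - 0.3 * pv - 0.3 * sst, 0.0))
    trace[t] = e, pv, sst

print(trace[-1])  # steady-state rates of the E, PV-like, and SST-like units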
Systems consolidation impairs behavioral flexibility
Sankirthana Sathiyakumar
Sofia Skromne Carrasco
Lydia Saad
Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits
Alexandre Payeur
Jordan Guerguiev
Friedemann Zenke
Richard Naud
Optogenetic activation of parvalbumin and somatostatin interneurons selectively restores theta-nested gamma oscillations and oscillation-induced spike timing-dependent long-term potentiation impaired by amyloid β oligomers
Kyerl Park
Jaedong Lee
Hyun Jae Jang
Michael M Kohl
Jeehyun Kwag
Spike-based causal inference for weight alignment
Jordan Guerguiev
Konrad Paul Kording
In artificial neural networks trained with gradient descent, the weights used for processing stimuli are also used during backward passes to calculate gradients. For the real brain to approximate gradients, gradient information would have to be propagated separately, such that one set of synaptic weights is used for processing and another set is used for backward passes. This produces the so-called "weight transport problem" for biological models of learning, where the backward weights used to calculate gradients need to mirror the forward weights used to process stimuli. This weight transport problem has been considered so hard that popular proposals for biological learning assume that the backward weights are simply random, as in the feedback alignment algorithm. However, such random weights do not appear to work well for large networks. Here we show how the discontinuity introduced in a spiking system can lead to a solution to this problem. The resulting algorithm is a special case of an estimator used for causal inference in econometrics, regression discontinuity design. We show empirically that this algorithm rapidly makes the backward weights approximate the forward weights. As the backward weights become correct, this improves learning performance over feedback alignment on tasks such as Fashion-MNIST, SVHN, CIFAR-10 and VOC. Our results demonstrate that a simple learning rule in a spiking network can allow neurons to produce the right backward connections and thus solve the weight transport problem.
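The core estimator is easy to demonstrate in isolation. In the toy simulation below, a neuron's spike effect on a downstream signal is confounded by the input drive, but a difference of means restricted to a narrow window around the spiking threshold (the regression discontinuity) recovers the forward weight. The numbers and the single-neuron setup are illustrative assumptions, not the paper's full learning rule.

# Toy regression-discontinuity estimate of one neuron's causal effect.
import numpy as np

rng = np.random.default_rng(0)
theta = 1.0        # spiking threshold
w_forward = 0.7    # true forward weight the backward weight should match

drive = rng.normal(theta, 0.5, size=10_000)    # input drive on each trial
spike = (drive >= theta).astype(float)         # the neuron spikes above threshold
downstream = w_forward * spike + 0.1 * drive + rng.normal(0, 0.05, size=drive.size)
# 'downstream' depends on the spike *and* is confounded by the drive itself,
# so a naive comparison of spike vs. no-spike trials would be biased.

# Regression discontinuity: restrict to a narrow window around threshold,
# where trials on either side are otherwise comparable.
window = np.abs(drive - theta) < 0.05
above, below = spike[window] == 1, spike[window] == 0
w_backward = downstream[window][above].mean() - downstream[window][below].mean()
print(f"estimated backward weight: {w_backward:.2f} (forward weight: {w_forward})")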
Forgetting at biologically realistic levels of neurogenesis in a large-scale hippocampal model
Lina M. Tran
Sheena A. Josselyn
Paul W. Frankland
A deep learning framework for neuroscience
Timothy P. Lillicrap
Philippe Beaudoin
Rafal Bogacz
Amelia Christensen
Claudia Clopath
Rui Ponte Costa
Archy de Berker
Surya Ganguli
Colleen J Gillon
Danijar Hafner
Adam Kepecs
Nikolaus Kriegeskorte
Peter Latham
Grace W. Lindsay
Kenneth D. Miller
Richard Naud
Christopher C. Pack
Panayiota Poirazi
Pieter Roelfsema
João Sacramento
Andrew Saxe
Benjamin Scellier
Anna C. Schapiro
Walter Senn
Greg Wayne
Daniel Yamins
Friedemann Zenke
Joel Zylberberg
Denis Therien
Konrad Paul Kording
Dissociating memory accessibility and precision in forgetting
S. Berens
A. Horner
Dendritic solutions to the credit assignment problem
Timothy P. Lillicrap