
Blake Richards

Core Academic Member
Canada CIFAR AI Chair
Associate Professor, McGill University, School of Computer Science and Department of Neurology and Neurosurgery

Biography

Blake Richards is an Associate Professor in the School of Computer Science and the Department of Neurology and Neurosurgery at McGill University, and a core faculty member at Mila – Quebec AI Institute. His research sits at the intersection of neuroscience and artificial intelligence. His laboratory investigates universal principles of intelligence that apply to both natural and artificial agents. He has received several distinctions for his work, including an Arthur B. McDonald Fellowship from the Natural Sciences and Engineering Research Council of Canada (NSERC) in 2022, the Young Investigator Award from the Canadian Association for Neuroscience in 2019, and a Canada CIFAR AI Chair in 2018. Richards also held a Banting Postdoctoral Fellowship at SickKids Hospital from 2011 to 2013. He obtained his PhD in neuroscience from the University of Oxford in 2010 and his bachelor's degree in cognitive science and AI from the University of Toronto in 2004.


Publications

Adversarial Feature Desensitization
Reza Bayat
Adam Ibrahim
Kartik Ahuja
Mojtaba Faramarzi
Touraj Laleh
Neural networks are known to be vulnerable to adversarial attacks -- slight but carefully constructed perturbations of the inputs which can drastically impair the network's performance. Many defense methods have been proposed for improving robustness of deep networks by training them on adversarially perturbed inputs. However, these models often remain vulnerable to new types of attacks not seen during training, and even to slightly stronger versions of previously seen attacks. In this work, we propose a novel approach to adversarial robustness, which builds upon the insights from the domain adaptation field. Our method, called Adversarial Feature Desensitization (AFD), aims at learning features that are invariant towards adversarial perturbations of the inputs. This is achieved through a game where we learn features that are both predictive and robust (insensitive to adversarial attacks), i.e. cannot be used to discriminate between natural and adversarial data. Empirical results on several benchmarks demonstrate the effectiveness of the proposed approach against a wide range of attack types and attack strengths. Our code is available at https://github.com/BashivanLab/afd.
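The min-max game described in the abstract lends itself to a compact illustration. Below is a minimal PyTorch sketch of the AFD training idea, assuming a one-step FGSM attack and toy linear modules; all module and variable names here are illustrative, not the authors' API, and the actual implementation lives at the repository linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative toy modules; the paper's networks are deep CNNs.
features = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
classifier = nn.Linear(128, 10)        # predicts the task label from features
discriminator = nn.Linear(128, 2)      # predicts natural (0) vs adversarial (1)

opt_f = torch.optim.Adam([*features.parameters(), *classifier.parameters()], lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def afd_step(x, y, eps=0.1):
    # 1) Craft adversarial examples (one-step FGSM as a stand-in attack).
    x_adv = x.clone().requires_grad_(True)
    task = F.cross_entropy(classifier(features(x_adv)), y)
    grad, = torch.autograd.grad(task, x_adv)
    x_adv = (x_adv + eps * grad.sign()).detach()

    # 2) Discriminator learns to tell natural from adversarial features.
    z_nat, z_adv = features(x).detach(), features(x_adv).detach()
    logits = torch.cat([discriminator(z_nat), discriminator(z_adv)])
    labels = torch.cat([torch.zeros(len(x)), torch.ones(len(x))]).long()
    opt_d.zero_grad()
    F.cross_entropy(logits, labels).backward()
    opt_d.step()

    # 3) Features stay predictive on both inputs while making adversarial
    #    features look "natural" to the discriminator (the desensitization).
    z_nat, z_adv = features(x), features(x_adv)
    predictive = F.cross_entropy(classifier(z_nat), y) + F.cross_entropy(classifier(z_adv), y)
    fool = F.cross_entropy(discriminator(z_adv), torch.zeros(len(x)).long())
    opt_f.zero_grad()
    (predictive + fool).backward()
    opt_f.step()

afd_step(torch.rand(32, 1, 28, 28), torch.randint(0, 10, (32,)))
```

The key design choice is the third step: the feature extractor is penalized whenever the discriminator can separate adversarial features from natural ones, which is the domain-adaptation-style pressure that makes the features insensitive to the perturbations.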
Your head is there to move you around: Goal-driven models of the primate dorsal pathway
Patrick J Mineault
Christopher C. Pack
Neurons in the dorsal visual pathway of the mammalian brain are selective for motion stimuli, with the complexity of stimulus representations increasing along the hierarchy. This progression is similar to that of the ventral visual pathway, which is well characterized by artificial neural networks (ANNs) optimized for object recognition. In contrast, there are no image-computable models of the dorsal stream with comparable explanatory power. We hypothesized that the properties of dorsal stream neurons could be explained by a simple learning objective: the need for an organism to orient itself during self-motion. To test this hypothesis, we trained a 3D ResNet to predict an agent’s self-motion parameters from visual stimuli in a simulated environment. We found that the responses in this network accounted well for the selectivity of neurons in a large database of single-neuron recordings from the dorsal visual stream of non-human primates. In contrast, ANNs trained on an action recognition dataset through supervised or self-supervised learning could not explain responses in the dorsal stream, despite also being trained on naturalistic videos with moving objects. These results demonstrate that an ecologically relevant cost function can account for dorsal stream properties in the primate brain.
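Since the paper's central move is a training objective rather than a new architecture, a short sketch captures it. The following is a minimal PyTorch sketch of that objective, regressing self-motion parameters from video clips with a 3D ResNet; the random stand-in data, the 6-parameter target (3 translation + 3 rotation), and the r3d_18 backbone are assumptions for illustration, not the paper's exact setup.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models.video import r3d_18

# Dummy stand-ins for clips rendered in a simulated environment:
# 16-frame RGB videos paired with 6 assumed self-motion parameters.
clips = torch.randn(8, 3, 16, 112, 112)
motion = torch.randn(8, 6)
loader = DataLoader(TensorDataset(clips, motion), batch_size=4)

model = r3d_18(weights=None)                   # 3D ResNet backbone, trained from scratch
model.fc = nn.Linear(model.fc.in_features, 6)  # regression head for self-motion parameters
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for batch, target in loader:
    loss = nn.functional.mse_loss(model(batch), target)  # predict self-motion from pixels
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Note that the supervision signal here is self-motion itself, not object labels, which is what makes the objective ecologically plausible.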
Different scaling of linear models and deep learning in UKBiobank brain images versus machine-learning datasets
Marc-Andre Schulz
B.T. Thomas Yeo
Joshua T. Vogelstein
Janaina Mourao-Miranda
Jakob N. Kather
Konrad Paul Kording
Distinct roles of parvalbumin and somatostatin interneurons in gating the synchronization of spike times in the neocortex
Hyun Jae Jang
Hyowon Chung
James M. Rowland
Michael M Kohl
Jeehyun Kwag
Sensory information-driven spikes are synchronized across cortical layers by distinct subtypes of interneurons. Synchronization of precise spike times across multiple neurons carries information about sensory stimuli. Inhibitory interneurons are suggested to promote this synchronization, but it is unclear whether distinct interneuron subtypes provide different contributions. To test this, we examined single-unit recordings from barrel cortex in vivo and used optogenetics to determine the contribution of parvalbumin (PV)- and somatostatin (SST)-positive interneurons to the synchronization of spike times across cortical layers. We found that PV interneurons preferentially promote the synchronization of spike times when instantaneous firing rates are low (<12 Hz), whereas SST interneurons preferentially promote the synchronization of spike times when instantaneous firing rates are high (>12 Hz). Furthermore, using a computational model, we demonstrate that these effects can be explained by PV and SST interneurons having preferential contributions to feedforward and feedback inhibition, respectively. Our findings demonstrate that distinct subtypes of inhibitory interneurons have frequency-selective roles in the spatiotemporal synchronization of precise spike times.
Systems consolidation impairs behavioral flexibility
Sankirthana Sathiyakumar
Sofia Skromne Carrasco
Lydia Saad
Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits
Alexandre Payeur
Jordan Guerguiev
Friedemann Zenke
Richard Naud
Optogenetic activation of parvalbumin and somatostatin interneurons selectively restores theta-nested gamma oscillations and oscillation-induced spike timing-dependent long-term potentiation impaired by amyloid β oligomers
Kyerl Park
Jaedong Lee
Hyun Jae Jang
Michael M Kohl
Jeehyun Kwag
Spike-based causal inference for weight alignment
Jordan Guerguiev
Konrad Paul Kording
In artificial neural networks trained with gradient descent, the weights used for processing stimuli are also used during backward passes to calculate gradients. For the real brain to approximate gradients, gradient information would have to be propagated separately, such that one set of synaptic weights is used for processing and another set is used for backward passes. This produces the so-called "weight transport problem" for biological models of learning, where the backward weights used to calculate gradients need to mirror the forward weights used to process stimuli. This weight transport problem has been considered so hard that popular proposals for biological learning assume that the backward weights are simply random, as in the feedback alignment algorithm. However, such random weights do not appear to work well for large networks. Here we show how the discontinuity introduced in a spiking system can lead to a solution to this problem. The resulting algorithm is a special case of an estimator used for causal inference in econometrics, regression discontinuity design. We show empirically that this algorithm rapidly makes the backward weights approximate the forward weights. As the backward weights become correct, this improves learning performance over feedback alignment on tasks such as Fashion-MNIST, SVHN, CIFAR-10 and VOC. Our results demonstrate that a simple learning rule in a spiking network can allow neurons to produce the right backward connections and thus solve the weight transport problem.
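The regression discontinuity estimator at the core of this algorithm can be illustrated with a toy simulation. The NumPy sketch below (simulated data; all names are illustrative, not the paper's implementation) fits separate lines just below and just above a spiking threshold and reads off the jump at the threshold as the causal-effect estimate, the quantity the paper uses in place of transported forward weights.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.0                    # spiking threshold on the neuron's drive u
true_effect = 0.8              # causal contribution of a spike (what we try to recover)

u = rng.uniform(-1, 1, 5000)   # pre-threshold drive across trials
spike = (u >= theta).astype(float)
# Downstream signal: smooth in u, plus a jump whenever the neuron spikes.
y = 0.5 * u + true_effect * spike + 0.1 * rng.standard_normal(u.size)

# Restrict to a narrow window around threshold and fit a line on each side.
window = np.abs(u - theta) < 0.2
below, above = window & (u < theta), window & (u >= theta)
fit_lo = np.polyfit(u[below], y[below], 1)
fit_hi = np.polyfit(u[above], y[above], 1)

# The discontinuity at threshold estimates the spike's causal effect.
estimate = np.polyval(fit_hi, theta) - np.polyval(fit_lo, theta)
print(f"estimated jump: {estimate:.3f} (true: {true_effect})")
```

Because the fits use only trials near threshold, where spiking is effectively as-if random, the jump isolates the spike's causal effect from the smooth dependence on the drive.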
Forgetting at biologically realistic levels of neurogenesis in a large-scale hippocampal model
Lina M. Tran
Sheena A. Josselyn
Paul W. Frankland
A deep learning framework for neuroscience
Timothy P. Lillicrap
Philippe Beaudoin
Rafal Bogacz
Amelia Christensen
Claudia Clopath
Rui Ponte Costa
Archy de Berker
Surya Ganguli
Colleen J Gillon
Danijar Hafner
Adam Kepecs
Nikolaus Kriegeskorte
Peter Latham
Grace W. Lindsay
Kenneth D. Miller
Richard Naud
Christopher C. Pack
Panayiota Poirazi
Pieter Roelfsema
João Sacramento
Andrew Saxe
Benjamin Scellier
Anna C. Schapiro
Walter Senn
Greg Wayne
Daniel Yamins
Friedemann Zenke
Joel Zylberberg
Denis Therien
Konrad Paul Kording
Dissociating memory accessibility and precision in forgetting
S. Berens
A. Horner
Dendritic solutions to the credit assignment problem
Timothy P. Lillicrap