
Blake Richards

Canada CIFAR AI Chair
McGill University, School of Computer Science and Department of Neurology and Neurosurgery
Google
Research Topics
Representation Learning
Reinforcement Learning
Generative Models
Computational Neuroscience

Biography

Blake Richards is a Research Director on the Paradigms of Intelligence team at Google and an Associate Professor in the School of Computer Science and the Department of Neurology and Neurosurgery at McGill University. He is also a Core Academic Member at Mila - Quebec Artificial Intelligence Institute.

His research lies at the intersection of neuroscience and artificial intelligence. His laboratory investigates universal principles of intelligence that apply to both natural and artificial agents.

He has received several honours for his work, including an Arthur B. McDonald Fellowship from the Natural Sciences and Engineering Research Council of Canada (NSERC) in 2022, the Young Investigator Award from the Canadian Association for Neuroscience in 2019, and a Canada CIFAR AI Chair in 2018. Richards also held a Banting Postdoctoral Fellowship at SickKids Hospital from 2011 to 2013. He obtained his PhD in neuroscience from the University of Oxford in 2010 and his BSc in cognitive science and AI from the University of Toronto in 2004.

Current Students

Postdoctorate - UdeM
Principal supervisor:
PhD - McGill
Co-supervisor:
PhD - McGill
Independent visiting researcher - NYU
PhD - McGill
Principal supervisor:
PhD - McGill
Alumni collaborator - McGill
Undergraduate - McGill
PhD - McGill
Postdoctorate - McGill
Co-supervisor:
Independent visiting researcher - UdeM
Alumni collaborator - McGill
PhD - McGill
PhD - McGill
Postdoctorate - McGill
PhD - McGill
PhD - McGill
PhD - UdeM
Principal supervisor:
Alumni collaborator - McGill
Independent visiting researcher - Université de Montréal
PhD - McGill
Co-supervisor:
PhD - McGill
Co-supervisor:
PhD - McGill
Principal supervisor:
Research Master's - McGill
Independent visiting researcher - NA
Research Master's - McGill
PhD - McGill
Research Master's - McGill
Co-supervisor:
Independent visiting researcher - York University
PhD - Concordia
Principal supervisor:

Publications

Top-down feedback matters: Functional impact of brainlike connectivity motifs on audiovisual integration
Artificial neural networks (ANNs) are an important tool for studying neural computation, but many features of the brain are not captured by standard ANN architectures. One notable missing feature in most ANN models is top-down feedback, i.e. projections from higher-order layers to lower-order layers in the network. Top-down feedback is ubiquitous in the brain, and it has a unique modulatory impact on activity in neocortical pyramidal neurons. However, we still do not understand its computational role. Here we develop a deep neural network model that captures the core functional properties of top-down feedback in the neocortex, allowing us to construct hierarchical recurrent ANN models that more closely reflect the architecture of the brain. We use this to explore the impact of different hierarchical recurrent architectures on an audiovisual integration task. We find that certain hierarchies, namely those that mimic the architecture of the human brain, impart ANN models with a light visual bias similar to that seen in humans. This bias does not impair performance on the audiovisual tasks. The results further suggest that different configurations of top-down feedback make otherwise identically connected models functionally distinct from each other, and from traditional feedforward-only models. Altogether, our findings demonstrate that modulatory top-down feedback is a computationally relevant feature of biological brains, and that incorporating it into ANNs can affect their behavior and help to determine the solutions that the network can discover.
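A minimal sketch of the kind of modulatory top-down feedback described above, written in PyTorch. The layer sizes, the sigmoid gating, and all names (e.g. ModulatedLayer) are illustrative assumptions rather than the paper's actual architecture: bottom-up and recurrent inputs drive the unit, while top-down input only rescales that drive.

```python
import torch
import torch.nn as nn

class ModulatedLayer(nn.Module):
    """Recurrent layer whose bottom-up drive is gated by top-down feedback (sketch)."""

    def __init__(self, in_dim, hidden_dim, top_dim):
        super().__init__()
        self.ff = nn.Linear(in_dim, hidden_dim)       # bottom-up (driving) input
        self.rec = nn.Linear(hidden_dim, hidden_dim)  # recurrent connections
        self.td = nn.Linear(top_dim, hidden_dim)      # top-down (modulatory) input

    def forward(self, x, h, top):
        drive = self.ff(x) + self.rec(h)
        # Top-down feedback rescales the drive rather than driving activity itself.
        gain = torch.sigmoid(self.td(top))
        return torch.relu(drive * gain)

# Toy usage: a lower-order sensory layer modulated by a higher-order layer.
layer = ModulatedLayer(in_dim=32, hidden_dim=64, top_dim=16)
x = torch.randn(8, 32)      # sensory input
h = torch.zeros(8, 64)      # previous hidden state
top = torch.randn(8, 16)    # activity from a higher-order layer
h_next = layer(x, h, top)   # shape: (8, 64)
```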
Brain-like learning with exponentiated gradients
Kaiwen Sheng
Brendan A. Bicknell
Beverley A. Clark
The oneirogen hypothesis: modeling the hallucinatory effects of classical psychedelics in terms of replay-dependent plasticity mechanisms
Harnessing small projectors and multiple views for efficient vision pretraining
Learning Successor Features the Simple Way
In deep reinforcement learning (RL), it is a challenge to learn representations that do not exhibit catastrophic forgetting or interference in non-stationary environments. Successor Features (SFs) offer a potential solution to this challenge. However, canonical techniques for learning SFs from pixel-level observations often lead to representation collapse, wherein representations degenerate and fail to capture meaningful variations in the data. More recent methods for learning SFs can avoid representation collapse, but they often involve complex losses and multiple learning phases, reducing their efficiency. We introduce a novel, simple method for learning SFs directly from pixels. Our approach uses a combination of a temporal-difference (TD) loss and a reward prediction loss, which together capture the basic mathematical definition of SFs. We show that our approach matches or outperforms existing SF learning techniques in both 2D (Minigrid) and 3D (Miniworld) mazes as well as Mujoco, for both single and continual learning scenarios. Moreover, our technique is efficient, and can reach higher levels of performance in less time than other approaches. Our work provides a new, streamlined technique for learning SFs directly from pixel observations, with no pretraining required.
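A rough sketch of the two losses named in the abstract (a TD loss on the successor features plus a reward-prediction loss), using a simplified state-based formulation. The encoder, network sizes, and names (SFNet, sf_losses) are assumptions for illustration, not the authors' code, and action-conditioning is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SFNet(nn.Module):
    def __init__(self, obs_dim, feat_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim)
        )                                                    # phi(s): state features
        self.sf_head = nn.Linear(feat_dim, feat_dim)         # psi(s): successor features
        self.reward_w = nn.Linear(feat_dim, 1, bias=False)   # r ~ w . phi(s)

    def forward(self, obs):
        phi = self.encoder(obs)
        return phi, self.sf_head(phi)

def sf_losses(net, obs, next_obs, reward, gamma=0.99):
    phi, psi = net(obs)
    with torch.no_grad():
        _, psi_next = net(next_obs)                          # bootstrapped target
    # TD loss: psi(s) should match phi(s) + gamma * psi(s')
    td_loss = F.mse_loss(psi, phi.detach() + gamma * psi_next)
    # Reward-prediction loss: the reward should be linear in phi(s)
    reward_loss = F.mse_loss(net.reward_w(phi).squeeze(-1), reward)
    return td_loss + reward_loss

# Toy usage with random transitions standing in for pixel observations.
net = SFNet(obs_dim=16, feat_dim=32)
obs, next_obs, reward = torch.randn(4, 16), torch.randn(4, 16), torch.randn(4)
loss = sf_losses(net, obs, next_obs, reward)
loss.backward()
```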
Towards a "Universal Translator" for Neural Dynamics at Single-Cell, Single-Spike Resolution
Yizi Zhang
Yanchen Wang
Donato M. Jiménez-Benetó
Zixuan Wang
Mehdi Azabou
Renee Tung
Olivier Winter
International Brain Laboratory
Eva L Dyer
Liam Paninski
Cole Lincoln Hurwitz
Stochastic Wiring of Cell Types Enhances Fitness by Generating Phenotypic Variability
Augustine N. Mavor-Parker
Anthony Zador
The development of neural connectivity is a crucial biological process that gives rise to diverse brain circuits and behaviors. Neural development is a stochastic process, but this stochasticity is often treated as a nuisance to overcome rather than as a functional advantage. Here we use a computational model, in which connection probabilities between discrete cell types are genetically specified, to investigate the benefits of stochasticity in the development of neural wiring. We show that this model can be viewed as a generalization of a powerful class of artificial neural networks—Bayesian neural networks—where each network parameter is a sample from a distribution. Our results reveal that stochasticity confers a greater benefit in large networks and variable environments, which may explain its role in organisms with larger brains. Surprisingly, we find that the average fitness over a population of agents is higher than that of a single agent defined by the average connection probability. Our model reveals how developmental stochasticity, by inducing a form of non-heritable phenotypic variability, can increase the probability that at least some individuals will survive in rapidly changing, unpredictable environments. Our results suggest how stochasticity may be an important feature rather than a bug in neural development.
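A toy illustration of the wiring rule the abstract describes: a "genome" specifies connection probabilities between cell types, and each individual is an independent sample from that genome, yielding non-heritable phenotypic variability. The array shapes, noise scale, and the Bernoulli-mask-times-weight construction are assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_types = 3
neurons_per_type = [20, 30, 10]
# "Genome": connection probability and mean weight for each ordered pair of cell types.
conn_prob = rng.uniform(0.05, 0.5, size=(n_types, n_types))
mean_weight = rng.normal(0.0, 1.0, size=(n_types, n_types))

def develop_individual():
    """Sample one concrete wiring diagram (phenotype) from the shared genome."""
    type_of = np.repeat(np.arange(n_types), neurons_per_type)
    n = type_of.size
    pre, post = type_of[:, None], type_of[None, :]
    mask = rng.random((n, n)) < conn_prob[pre, post]         # stochastic wiring
    weights = mean_weight[pre, post] + 0.1 * rng.standard_normal((n, n))
    return mask * weights

# A population sharing one genome but differing in its sampled wiring.
population = [develop_individual() for _ in range(5)]
print(population[0].shape, np.count_nonzero(population[0]))
```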
Interpretability in Action: Exploratory Analysis of VPT, a Minecraft Agent
Karolis Jucys
Stephanie Milani
Mohammad Reza Samsami
Özgür Şimşek
Understanding the mechanisms behind decisions taken by large foundation models in sequential tasks is critical to ensuring that such systems operate transparently and safely. However, interpretability methods have not yet been applied extensively to large-scale agents based on reinforcement learning. In this work, we perform exploratory analysis on the Video PreTraining (VPT) Minecraft-playing agent, one of the largest open-source vision-based agents. We try to illuminate its reasoning mechanisms by applying various interpretability techniques. First, we analyze the attention mechanism while the agent solves its training task --- crafting a diamond pickaxe. The agent seems to pay attention to the last 4 frames and several key frames further back. This provides clues as to how it maintains coherence in a task that takes 3-10 minutes, despite the agent's short memory span of only six seconds. Second, we perform various interventions, which help us uncover a worrying case of goal misgeneralization: VPT mistakenly identifies a villager wearing brown clothes as a tree trunk and punches it to death when positioned stationary under green tree leaves. We demonstrate similar misbehavior in a related agent (STEVE-1), which motivates the use of VPT as a model organism for large-scale vision-based agent interpretability.
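For readers unfamiliar with this style of analysis, the sketch below shows a generic way to inspect attention over past frames by registering a forward hook on a PyTorch attention module. It uses a toy MultiheadAttention layer rather than VPT's actual architecture; all shapes and names here are assumptions.

```python
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
recorded = {}

def save_attention(module, inputs, output):
    # With need_weights=True, output is (attn_output, attn_weights).
    recorded["weights"] = output[1].detach()

attn.register_forward_hook(save_attention)

frames = torch.randn(1, 128, 64)                  # embeddings of 128 past frames
_ = attn(frames, frames, frames, need_weights=True)

# For the most recent frame (last query), which earlier frames get the most weight?
last_query_weights = recorded["weights"][0, -1]   # shape: (128,)
top5 = torch.topk(last_query_weights, k=5)
print("Most-attended frame indices:", top5.indices.tolist())
```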
Sequential predictive learning is a unifying theory for hippocampal representation and replay
The mammalian hippocampus contains a cognitive map that represents an animal's position in the environment [1] and generates offline "replay" [2,3] for the purposes of recall [4], planning [5,6], and forming long-term memories [7]. Recently, it has been found that artificial neural networks trained to predict sensory inputs develop spatially tuned cells [8], aligning with predictive theories of hippocampal function [9-11]. However, whether predictive learning can also account for the ability to produce offline replay is unknown. Here, we find that spatially tuned cells, which robustly emerge from all forms of predictive learning, do not guarantee the presence of a cognitive map with the ability to generate replay. Offline simulations only emerged in networks that used recurrent connections and head-direction information to predict multi-step observation sequences, which promoted the formation of a continuous attractor reflecting the geometry of the environment. These offline trajectories showed wake-like statistics, autonomously replayed recently experienced locations, and could be directed by a virtual head-direction signal. Further, we found that networks trained to make cyclical predictions of future observation sequences were able to rapidly learn a cognitive map and produce sweeping representations of future positions reminiscent of hippocampal theta sweeps [12]. These results demonstrate how hippocampal-like representation and replay can emerge in neural networks engaged in predictive learning, and suggest that hippocampal theta sequences reflect a circuit that implements a data-efficient algorithm for sequential predictive learning. Together, this framework provides a unifying theory for hippocampal functions and hippocampal-inspired approaches to artificial intelligence.
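A minimal sketch of the training setup described above: a recurrent network receives observations plus a head-direction signal and is trained to predict the next several observations at every timestep. The GRU choice, dimensions, and names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SequentialPredictor(nn.Module):
    def __init__(self, obs_dim, hd_dim, hidden_dim, horizon):
        super().__init__()
        self.horizon = horizon
        self.rnn = nn.GRU(obs_dim + hd_dim, hidden_dim, batch_first=True)
        self.readout = nn.Linear(hidden_dim, obs_dim * horizon)  # multi-step prediction

    def forward(self, obs, head_dir):
        h, _ = self.rnn(torch.cat([obs, head_dir], dim=-1))
        preds = self.readout(h)
        return preds.view(obs.shape[0], obs.shape[1], self.horizon, -1)

# Toy training step: predict the next `horizon` observations at every timestep.
model = SequentialPredictor(obs_dim=10, hd_dim=2, hidden_dim=64, horizon=5)
obs = torch.randn(4, 50, 10)        # batch of observation sequences
head_dir = torch.randn(4, 50, 2)    # head-direction input
preds = model(obs, head_dir)

# Stack future observations as targets (zero-padded past the end of the sequence).
targets = torch.zeros(4, 50, 5, 10)
for k in range(1, 6):
    targets[:, :-k, k - 1] = obs[:, k:]
loss = nn.functional.mse_loss(preds, targets)
loss.backward()
```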
Stimulus information guides the emergence of behavior-related signals in primary somatosensory cortex during learning.
Mariangela Panniello
Colleen J Gillon
Roberto Maffulli
Marco Celotto
Stefano Panzeri
Michael M Kohl