
Paul Masset

Affiliate Member
Assistant Professor, McGill University
Research Topics
Reinforcement Learning
Cognition
NeuroAI
Computational Neuroscience

Biography

Paul Masset is an Assistant Professor in the Department of Psychology at McGill University, working at the intersection of neuroscience, artificial intelligence, and cognitive science. His research group seeks to understand how the structure of neural circuits endows the brain with efficient distributed computations that underlie cognition, and how these principles can be leveraged to design more efficient learning algorithms.

Before joining McGill, he was a postdoctoral fellow at Harvard University. He obtained his PhD at Cold Spring Harbor Laboratory, his master's in cognitive science at the École des hautes études en sciences sociales (EHESS), and his M.Eng/B.A. in information and computer engineering at the University of Cambridge.

Current Students

PhD - McGill
Principal supervisor:

Publications

Implicit Generative Modeling by Kernel Similarity Matching
Shubham Choudhary
Demba Ba
Understanding how the brain encodes stimuli has been a fundamental problem in computational neuroscience. Insights into this problem have led to the design and development of artificial neural networks that learn representations by incorporating brain-like learning abilities. Recently, learning representations by capturing similarity between input samples has been studied to tackle this problem. This approach, however, has thus far been used to only learn downstream features from an input and has not been studied in the context of a generative paradigm, where one can map the representations back to the input space, incorporating not only bottom-up interactions (stimuli to latent) but also learning features in a top-down manner (latent to stimuli). We investigate a kernel similarity matching framework for generative modeling. Starting with a modified sparse coding objective for learning representations proposed in prior work, we demonstrate that representation learning in this context is equivalent to maximizing similarity between the input kernel and a latent kernel. We show that an implicit generative model arises from learning the kernel structure in the latent space and show how the framework can be adapted to learn manifold structures, potentially providing insights as to how task representations can be encoded in the brain. To solve the objective, we propose a novel Alternate Direction Method of Multipliers (ADMM) based algorithm and discuss the interpretation of the optimization process. Finally, we discuss how this representation learning problem can lead towards a biologically plausible architecture to learn the model parameters that ties together representation learning using similarity matching (a bottom-up approach) with predictive coding (a top-down approach).
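To give a rough flavour of the kernel similarity matching idea described in this abstract, the toy sketch below fits latent codes so that the latent kernel Z Zᵀ approximates a linear input kernel X Xᵀ, using a plain gradient step with an L1 sparsity penalty. It is only a minimal illustration under simplifying assumptions, not the ADMM-based algorithm proposed in the paper; all names and hyperparameters here are assumptions.

```python
# Illustrative sketch only, not the paper's ADMM-based algorithm:
# toy gradient descent on similarity matching between a linear input
# kernel X X^T and a latent kernel Z Z^T, with an L1 sparsity penalty.
import numpy as np

rng = np.random.default_rng(0)

def similarity_matching(X, latent_dim, n_iters=1000, lr=1e-3, sparsity=0.1):
    """Fit latent codes Z so that Z Z^T matches the input kernel X X^T."""
    n_samples = X.shape[0]
    K_x = X @ X.T                              # input (linear) kernel
    Z = 0.01 * rng.standard_normal((n_samples, latent_dim))
    for _ in range(n_iters):
        K_z = Z @ Z.T                          # latent kernel
        # Gradient of 0.5 * ||K_x - K_z||_F^2 with respect to Z,
        # plus an L1 subgradient that encourages sparse codes.
        grad = -2.0 * (K_x - K_z) @ Z + sparsity * np.sign(Z)
        Z -= lr * grad
    return Z

X = rng.standard_normal((50, 10))              # toy "stimuli"
Z = similarity_matching(X, latent_dim=3)
# Kernel mismatch after fitting (should be well below the initial mismatch)
print(np.linalg.norm(X @ X.T - Z @ Z.T))
```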
Interpretable deep learning for deconvolutional analysis of neural signals
Bahareh Tolooshams
Sara Matias
Hao Wu
Simona Temereanca
Naoshige Uchida
Venkatesh N. Murthy
Demba Ba
Combining Sampling Methods with Attractor Dynamics in Spiking Models of Head-Direction Systems
Vojko Pjanovic
Jacob Zavatone-Veth
Sander Keemink
Michele Nardin
Uncertainty is a fundamental aspect of the natural environment, requiring the brain to infer and integrate noisy signals to guide behavior effectively. Sampling-based inference has been proposed as a mechanism for dealing with uncertainty, particularly in early sensory processing. However, it is unclear how to reconcile sampling-based methods with operational principles of higher-order brain areas, such as attractor dynamics of persistent neural representations. In this study, we present a spiking neural network model for the head-direction (HD) system that combines sampling-based inference with attractor dynamics. To achieve this, we derive the required spiking neural network dynamics and interactions to perform sampling from a large family of probability distributions—including variables encoded with Poisson noise. We then propose a method that allows the network to update its estimate of the current head direction by integrating angular velocity samples—derived from noisy inputs—with a pull towards a circular manifold, thereby maintaining consistent attractor dynamics. This model makes specific, testable predictions about the HD system that can be examined in future neurophysiological experiments: it predicts correlated subthreshold voltage fluctuations; distinctive short- and long-term firing correlations among neurons; and characteristic statistics of the movement of the neural activity “bump” representing the head direction. Overall, our approach extends previous theories on probabilistic sampling with spiking neurons, offers a novel perspective on the computations responsible for orientation and navigation, and supports the hypothesis that sampling-based methods can be combined with attractor dynamics to provide a viable framework for studying neural dynamics across the brain.
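For a concrete, much-simplified picture of combining velocity samples with attractor dynamics, the toy sketch below uses a ring of rate units (not spiking neurons, as in the paper) whose activity bump integrates noisy angular-velocity samples and is then projected back onto the bump manifold as a stand-in for the attractor pull. It is not the model described in the abstract; all names and parameters are assumptions.

```python
# Illustrative toy sketch, not the paper's spiking network: a ring of rate
# units whose activity bump integrates noisy angular-velocity samples, with
# an explicit projection back onto the bump manifold replacing the attractor
# dynamics. All parameters here are assumptions.
import numpy as np

rng = np.random.default_rng(1)
N = 64                                                    # units on the ring
prefs = np.linspace(0.0, 2 * np.pi, N, endpoint=False)    # preferred directions

def bump(theta, width=0.5):
    """Idealized bump of activity centred on heading `theta`."""
    return np.exp(np.cos(prefs - theta) / width**2)

def decode(r):
    """Population-vector estimate of the bump centre."""
    return np.angle(np.sum(r * np.exp(1j * prefs)))

r = bump(0.0)                          # start with the bump at 0 rad
true_theta, omega, dt = 0.0, 2.0, 0.01
for _ in range(1000):
    true_theta = (true_theta + omega * dt) % (2 * np.pi)
    omega_sample = omega + rng.normal(scale=1.0)          # one noisy velocity sample
    # Shift the bump by the sampled rotation, then snap the activity back
    # onto the ring manifold (the "attractor" step in this toy version).
    r = bump(decode(r) + omega_sample * dt)

print(f"true heading {true_theta:.2f} rad, decoded {decode(r) % (2 * np.pi):.2f} rad")
```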
Perception and neural representation of intermittent odor stimuli in mice
Luis Boero
Hao Wu
Joseph D. Zak
Farhad Pashakhanloo
Siddharth Jayakumar
Bahareh Tolooshams
Demba Ba
Venkatesh N. Murthy
Bounded optimality of time investments in rats, mice, and humans
Torben Ott
Marion Bosc
Joshua I. Sanders
Adam Kepecs
Self Supervised Dictionary Learning Using Kernel Matching
Shubham Choudhary
Demba Ba
We introduce a self-supervised framework for learning representations in the context of dictionary learning. We cast the problem as a kernel matching task between the input and the representation space, with constraints on the latent kernel. By adjusting these constraints, we demonstrate how the framework can adapt to different learning objectives. We then formulate a novel Alternate Direction Method of Multipliers (ADMM) based algorithm to solve the optimization problem and connect the dynamics to classical alternate minimization techniques. This approach offers a unique way of learning representations with kernel constraints that enables us to implicitly learn a generative map for the data from the learned representations, which can have broad applications in representation learning tasks both in machine learning and neuroscience.
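For readers unfamiliar with the alternating structure such methods build on, the sketch below shows a classical alternating-minimization loop for dictionary learning: an ISTA sparse-coding step followed by a dictionary gradient step. It is not the kernel-constrained ADMM algorithm of the paper, only a baseline illustration; all names and parameters are assumptions.

```python
# Illustrative sketch of classical alternating minimization for dictionary
# learning (sparse coding via ISTA, then a dictionary gradient step).
# Not the paper's kernel-constrained ADMM algorithm.
import numpy as np

rng = np.random.default_rng(2)

def soft_threshold(v, lam):
    """Proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def dictionary_learning(X, n_atoms, n_outer=50, n_ista=30, lam=0.1, lr=1e-3):
    """Alternate between sparse codes Z (ISTA) and dictionary D (gradient step)."""
    n_features, n_samples = X.shape
    D = rng.standard_normal((n_features, n_atoms))
    D /= np.linalg.norm(D, axis=0)                     # unit-norm atoms
    Z = np.zeros((n_atoms, n_samples))
    for _ in range(n_outer):
        step = 1.0 / np.linalg.norm(D.T @ D, 2)        # ISTA step size (1 / Lipschitz)
        for _ in range(n_ista):                        # sparse coding step
            Z = soft_threshold(Z - step * D.T @ (D @ Z - X), step * lam)
        D -= lr * (D @ Z - X) @ Z.T                    # dictionary gradient step
        D /= np.linalg.norm(D, axis=0)                 # re-project atoms to unit norm
    return D, Z

X = rng.standard_normal((30, 200))                     # toy data matrix
D, Z = dictionary_learning(X, n_atoms=50)
print(np.linalg.norm(X - D @ Z) / np.linalg.norm(X))   # relative reconstruction error
```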