
Paul Masset

Associate Academic Member
Assistant Professor, McGill University
Research Topics
Cognition
Computational Neuroscience
NeuroAI
Reinforcement Learning

Biography

Paul Masset is an Assistant Professor in the Department of Psychology at McGill University working at the intersection of neuroscience, AI and cognitive science. The focus of his research group is to understand how the structure of neural circuits endows the brain with efficient distributed computations underlying cognition and how we can leverage these principles to design more efficient learning algorithms.

Prior to joining McGill, he was a Postdoctoral Fellow at Harvard University. He obtained his PhD at Cold Spring Harbor Laboratory, his Master's in Cognitive Science at the École des hautes études en sciences sociales (EHESS), and his M.Eng/B.A. in Information and Computer Engineering at the University of Cambridge.

Current Students

PhD - McGill University
Undergraduate - McGill University
PhD - McGill University
Co-supervisor:
PhD - McGill University
Principal supervisor:
PhD - McGill University
Co-supervisor:

Publications

Simultaneous detection and estimation in olfactory sensing
Matthew Y. He
Venkatesh N. Murthy
Cengiz Pehlevan
Jacob A. Zavatone-Veth
The mammalian olfactory system shows an exceptional ability for rapid and accurate decoding of both the identity and concentration of odorants. Previous works have used the theory of compressed sensing to elucidate the algorithmic basis for this capability: decoding odor information from the responses of a restricted repertoire of receptors is possible because only a few relevant odorants are present in any given sensory scene. However, existing circuit models for olfactory decoding still cannot contend with the complexity of naturalistic olfactory scenes; they are limited to detection of a handful of odorants. Here, we propose a model for olfactory compressed sensing inspired by simultaneous localization and mapping algorithms in navigation: the set of odors that are present in a given scene, and the concentration of those present odors, are inferred separately. To enable rapid inference of odor presence in a biologically plausible recurrent circuit, our model leverages the framework of Mirrored Langevin Dynamics, which gives a general recipe for sampling from constrained distributions using rate-based dynamics. This results in a recurrent circuit model that can accurately infer presence and concentration at scale and can be mapped onto the primary cell types of the olfactory bulb. This framework offers a path towards circuit models—for olfactory sensing and beyond—that both perform well in naturalistic environments and make experimentally testable predictions for neural response dynamics.
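The circuit model itself is not reproduced here, but the core idea of Mirrored Langevin Dynamics — running unconstrained Langevin dynamics in a dual (mirror) coordinate so that samples automatically respect a constraint — can be sketched in a few lines. This is a minimal one-dimensional illustration under a nonnegativity constraint using the entropic mirror map; the function name and toy target are illustrative, not from the paper.

```python
import numpy as np

def mirrored_langevin_nonneg(grad_U, x0, n_steps=20000, step=0.01, seed=0):
    """Sample from p(x) proportional to exp(-U(x)) on x > 0 via Mirrored
    Langevin Dynamics.

    Uses the entropic mirror map phi(x) = x log x - x, whose dual coordinate
    is theta = log x. The pushed-forward potential in dual space is
    W(theta) = U(e^theta) - theta, so plain (unconstrained) Langevin dynamics
    on theta yields samples whose image x = e^theta is always positive.
    """
    rng = np.random.default_rng(seed)
    theta = np.log(x0)
    samples = []
    for _ in range(n_steps):
        x = np.exp(theta)
        grad_W = grad_U(x) * x - 1.0  # chain rule plus log-det correction
        theta = theta - step * grad_W + np.sqrt(2 * step) * rng.standard_normal()
        samples.append(np.exp(theta))
    return np.array(samples)

# Toy target: exponential distribution p(x) ∝ exp(-2x), whose mean is 1/2.
samples = mirrored_langevin_nonneg(grad_U=lambda x: 2.0, x0=1.0)
```

Every sample is strictly positive by construction, with no rejection or projection step — the same property that lets the paper map the sampler onto rate-based (nonnegative) neural dynamics.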
Implicit Generative Modeling by Kernel Similarity Matching
Shubham Choudhary
Demba Ba
Understanding how the brain encodes stimuli has been a fundamental problem in computational neuroscience. Insights into this problem have led to the design and development of artificial neural networks that learn representations by incorporating brain-like learning abilities. Recently, learning representations by capturing similarity between input samples has been studied to tackle this problem. This approach, however, has thus far been used only to learn downstream features from an input and has not been studied in the context of a generative paradigm, where one can map the representations back to the input space, incorporating not only bottom-up interactions (stimuli to latent) but also learning features in a top-down manner (latent to stimuli). We investigate a kernel similarity matching framework for generative modeling. Starting with a modified sparse coding objective for learning representations proposed in prior work, we demonstrate that representation learning in this context is equivalent to maximizing similarity between the input kernel and a latent kernel. We show that an implicit generative model arises from learning the kernel structure in the latent space and show how the framework can be adapted to learn manifold structures, potentially providing insights as to how task representations can be encoded in the brain. To solve the objective, we propose a novel Alternating Direction Method of Multipliers (ADMM) based algorithm and discuss the interpretation of the optimization process. Finally, we discuss how this representation learning problem can lead towards a biologically plausible architecture to learn the model parameters that ties together representation learning using similarity matching (a bottom-up approach) with predictive coding (a top-down approach).
Interpretable deep learning for deconvolutional analysis of neural signals
Bahareh Tolooshams
Sara Matias
Hao Wu
Simona Temereanca
Naoshige Uchida
Venkatesh N. Murthy
Demba Ba
Combining Sampling Methods with Attractor Dynamics in Spiking Models of Head-Direction Systems
Vojko Pjanovic
Jacob Zavatone-Veth
Sander Keemink
Michele Nardin
Uncertainty is a fundamental aspect of the natural environment, requiring the brain to infer and integrate noisy signals to guide behavior effectively. Sampling-based inference has been proposed as a mechanism for dealing with uncertainty, particularly in early sensory processing. However, it is unclear how to reconcile sampling-based methods with operational principles of higher-order brain areas, such as attractor dynamics of persistent neural representations. In this study, we present a spiking neural network model for the head-direction (HD) system that combines sampling-based inference with attractor dynamics. To achieve this, we derive the required spiking neural network dynamics and interactions to perform sampling from a large family of probability distributions—including variables encoded with Poisson noise. We then propose a method that allows the network to update its estimate of the current head direction by integrating angular velocity samples—derived from noisy inputs—with a pull towards a circular manifold, thereby maintaining consistent attractor dynamics. This model makes specific, testable predictions about the HD system that can be examined in future neurophysiological experiments: it predicts correlated subthreshold voltage fluctuations; distinctive short- and long-term firing correlations among neurons; and characteristic statistics of the movement of the neural activity “bump” representing the head direction. Overall, our approach extends previous theories on probabilistic sampling with spiking neurons, offers a novel perspective on the computations responsible for orientation and navigation, and supports the hypothesis that sampling-based methods can be combined with attractor dynamics to provide a viable framework for studying neural dynamics across the brain.
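The two ingredients the abstract combines — integrating noisy angular-velocity samples and a pull toward a circular manifold — can be illustrated without the full spiking machinery. The sketch below is a deliberately simplified rate-free toy, not the paper's model: a 2-D state rotates according to sampled velocities while a soft attractor term keeps it on the unit circle. All names and parameter values are illustrative.

```python
import numpy as np

def integrate_heading(omega, dt=0.001, t_max=1.0, noise=0.1, pull=0.1, seed=0):
    """Toy head-direction estimate: integrate noisy angular-velocity samples
    while a soft attractor term pulls the 2-D state back onto the unit circle.
    Returns the decoded heading angle and the state's distance from origin.
    """
    rng = np.random.default_rng(seed)
    J = np.array([[0.0, -1.0], [1.0, 0.0]])  # generator of 2-D rotations
    z = np.array([1.0, 0.0])                  # initial heading: angle 0
    for _ in range(int(t_max / dt)):
        omega_sample = omega + noise * rng.standard_normal()  # noisy velocity
        z = z + dt * omega_sample * (J @ z)        # rotate by sampled velocity
        z = z + pull * (z / np.linalg.norm(z) - z)  # pull toward the circle
    return np.arctan2(z[1], z[0]), np.linalg.norm(z)

# Constant quarter-turn-per-second input for one second.
angle, radius = integrate_heading(omega=np.pi / 2)
```

After one second of a constant quarter-turn-per-second input, the decoded angle sits near π/2 and the state stays on the circle — the attractor absorbs both sampling noise and the slight radial drift of Euler integration.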
Perception and neural representation of intermittent odor stimuli in mice
Luis Boero
Hao Wu
Joseph D. Zak
Farhad Pashakhanloo
Siddharth Jayakumar
Bahareh Tolooshams
Demba Ba
Venkatesh N. Murthy
Bounded optimality of time investments in rats, mice, and humans
Torben Ott
Marion Bosc
Joshua I. Sanders
Adam Kepecs
Self Supervised Dictionary Learning Using Kernel Matching
Shubham Choudhary
Demba Ba
We introduce a self-supervised framework for learning representations in the context of dictionary learning. We cast the problem as a kernel matching task between the input and the representation space, with constraints on the latent kernel. By adjusting these constraints, we demonstrate how the framework can adapt to different learning objectives. We then formulate a novel Alternating Direction Method of Multipliers (ADMM) based algorithm to solve the optimization problem and connect the dynamics to classical alternate minimization techniques. This approach offers a unique way of learning representations with kernel constraints, enabling us to implicitly learn a generative map for the data from the learned representations, which can have broad applications in representation learning tasks in both machine learning and neuroscience.
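The kernel-matching objective at the heart of this line of work — align the latent Gram matrix with the input Gram matrix under a constraint on the latents — can be sketched compactly. The paper solves it with an ADMM-based algorithm; the toy below substitutes plain projected gradient descent on a nonnegativity-constrained version, purely to make the objective concrete. Function names, shapes, and step sizes are assumptions for illustration.

```python
import numpy as np

def kernel_match(X, k, n_iters=1000, lr=5e-4, seed=0):
    """Minimize ||X^T X - Z^T Z||_F^2 over nonnegative latents Z (k x n) by
    projected gradient descent — a simplified stand-in for the ADMM solver.
    The gradient of the objective with respect to Z is 4 Z (Z^T Z - X^T X).
    """
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    Kx = X.T @ X                       # input similarity (Gram) matrix
    Z = rng.random((k, n))             # nonnegative initialization
    for _ in range(n_iters):
        E = Z.T @ Z - Kx               # kernel mismatch
        Z = np.maximum(0.0, Z - lr * 4.0 * (Z @ E))  # step, then project
    return Z, np.linalg.norm(Z.T @ Z - Kx)

X = np.random.default_rng(1).standard_normal((5, 20))
Z, residual = kernel_match(X, k=5)
```

Constraining Z to be nonnegative is what prevents a trivial rotation of the input from solving the problem exactly, which is also why the choice of latent-kernel constraint shapes what representation is learned.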