
Paul Masset

Associate Academic Member
Assistant Professor, McGill University
Research Topics
Cognition
Computational Neuroscience
NeuroAI
Reinforcement Learning

Biography

Paul Masset is an Assistant Professor in the Department of Psychology at McGill University working at the intersection of neuroscience, AI and cognitive science. The focus of his research group is to understand how the structure of neural circuits endows the brain with efficient distributed computations underlying cognition and how we can leverage these principles to design more efficient learning algorithms.

Prior to joining McGill, he was a Postdoctoral Fellow at Harvard University. He obtained his PhD at Cold Spring Harbor Laboratory, his Master's in Cognitive Science at the École des hautes études en sciences sociales (EHESS), and his M.Eng/B.A. in Information and Computer Engineering at the University of Cambridge.

Current Students

PhD - McGill University
Undergraduate - McGill University
PhD - McGill University
Co-supervisor:
PhD - McGill University
Principal supervisor:
PhD - McGill University
Co-supervisor:

Publications

Simultaneous detection and estimation in olfactory sensing
Matthew Y. He
Venkatesh N. Murthy
Cengiz Pehlevan
Jacob A. Zavatone-Veth
The mammalian olfactory system shows an exceptional ability for rapid and accurate decoding of both the identity and concentration of odorants. Previous works have used the theory of compressed sensing to elucidate the algorithmic basis for this capability: decoding odor information from the responses of a restricted repertoire of receptors is possible because only a few relevant odorants are present in any given sensory scene. However, existing circuit models for olfactory decoding still cannot contend with the complexity of naturalistic olfactory scenes; they are limited to detection of a handful of odorants. Here, we propose a model for olfactory compressed sensing inspired by simultaneous localization and mapping algorithms in navigation: the set of odors that are present in a given scene, and the concentration of those present odors, are inferred separately. To enable rapid inference of odor presence in a biologically-plausible recurrent circuit, our model leverages the framework of Mirrored Langevin Dynamics, which gives a general recipe for sampling from constrained distributions using rate-based dynamics. This results in a recurrent circuit model that can accurately infer presence and concentration at scale and can be mapped onto the primary cell types of the olfactory bulb. This framework offers a path towards circuit models—for olfactory sensing and beyond—that both perform well in naturalistic environments and make experimentally-testable predictions for neural response dynamics.
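The decoding principle in this abstract, recovering a sparse set of present odorants from a compressed set of receptor responses, can be illustrated with a generic non-negative sparse solver. The sketch below uses plain ISTA (iterative soft-thresholding), not the Mirrored Langevin circuit the paper proposes, and all sizes and parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy olfactory compressed sensing: 40 "receptors" sense 100 possible
# "odorants", but only 3 odorants are actually present in the scene.
n_receptors, n_odorants, n_present = 40, 100, 3
A = rng.normal(size=(n_receptors, n_odorants)) / np.sqrt(n_receptors)
c_true = np.zeros(n_odorants)
present = rng.choice(n_odorants, n_present, replace=False)
c_true[present] = rng.uniform(1.0, 2.0, n_present)  # concentrations
r = A @ c_true  # receptor responses

def nonneg_ista(A, r, lam=0.01, steps=500):
    """Generic non-negative ISTA: sparse decoding of r = A @ c."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    c = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ c - r)
        c = np.maximum(c - grad / L - lam / L, 0.0)  # soft threshold at zero
    return c

c_hat = nonneg_ista(A, r)
```

Even though there are far fewer receptors than candidate odorants, the sparsity prior lets the decoder recover which odorants are present and at what concentration, which is the premise the paper's scene-level presence inference builds on.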
Multi-timescale reinforcement learning in the brain
Pablo Tano
HyungGoo R. Kim
Athar N. Malik
Alexandre Pouget
Naoshige Uchida
To thrive in complex environments, animals and artificial agents must learn to act adaptively to maximize fitness and rewards. Such adaptive behavior can be learned through reinforcement learning1, a class of algorithms that has been successful at training artificial agents2–6 and at characterizing the firing of dopamine neurons in the midbrain7–9. In classical reinforcement learning, agents discount future rewards exponentially according to a single time scale, controlled by the discount factor. Here, we explore the presence of multiple timescales in biological reinforcement learning. We first show that reinforcement agents learning at a multitude of timescales possess distinct computational benefits. Next, we report that dopamine neurons in mice performing two behavioral tasks encode reward prediction error with a diversity of discount time constants. Our model explains the heterogeneity of temporal discounting in both cue-evoked transient responses and slower timescale fluctuations known as dopamine ramps. Crucially, the measured discount factor of individual neurons is correlated across the two tasks, suggesting that it is a cell-specific property. Together, our results provide a new paradigm to understand functional heterogeneity in dopamine neurons, a mechanistic basis for the empirical observation that humans and animals use non-exponential discounts in many situations10–14, and open new avenues for the design of more efficient reinforcement learning algorithms.
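One computational benefit of multiple timescales can be seen with a few lines of arithmetic. Under classical exponential discounting, a reward r delayed by d steps has present value gamma**d * r, so a single agent cannot distinguish a small early reward from a large late one; two discount factors jointly pin down the delay. The discount values below are hypothetical, chosen only for illustration:

```python
import numpy as np

# A reward r arriving d steps in the future has value gamma**d * r under
# a single exponential discount. One gamma confounds magnitude and delay;
# a population of agents with diverse gammas does not: the ratio of two
# values cancels r and reveals d.
gammas = np.array([0.5, 0.7, 0.9, 0.99])  # hypothetical diversity of timescales
r, d = 1.0, 5
values = gammas ** d * r  # one value per "agent"

# Decode the reward delay from any pair of discount factors:
g1, g2 = gammas[0], gammas[1]
v1, v2 = values[0], values[1]
d_decoded = np.log(v1 / v2) / np.log(g1 / g2)  # recovers d exactly
```

Given the delay, the magnitude follows from any single value (r = v1 / g1**d), so a diverse population encodes the full timing of future reward, which is the intuition behind the distinct computational benefits claimed in the abstract.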
Implicit Generative Modeling by Kernel Similarity Matching
Shubham Choudhary
Demba Ba
Interpretable deep learning for deconvolutional analysis of neural signals
Bahareh Tolooshams
Sara Matias
Hao Wu
Simona Temereanca
Naoshige Uchida
Venkatesh N. Murthy
Demba Ba
The widespread adoption of deep learning to build models that capture the dynamics of neural populations is typically based on “black-box” approaches that lack an interpretable link between neural activity and network parameters. Here, we propose to apply algorithm unrolling, a method for interpretable deep learning, to design the architecture of sparse deconvolutional neural networks and obtain a direct interpretation of network weights in relation to stimulus-driven single-neuron activity through a generative model. We characterize our method, referred to as deconvolutional unrolled neural learning (DUNL), and show its versatility by applying it to deconvolve single-trial local signals across multiple brain areas and recording modalities. To exemplify use cases of our decomposition method, we uncover multiplexed salience and reward prediction error signals from midbrain dopamine neurons in an unbiased manner, perform simultaneous event detection and characterization in somatosensory thalamus recordings, and characterize the heterogeneity of neural responses in the piriform cortex and in the striatum during unstructured, naturalistic experiments. Our work leverages the advances in interpretable deep learning to gain a mechanistic understanding of neural activity.
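Algorithm unrolling, the method named in the abstract, turns each iteration of an optimization algorithm into one network layer, so the layer "weights" are parameters of the generative model rather than opaque coefficients. A minimal numpy sketch for sparse deconvolution conveys the idea (this is not the DUNL implementation; the kernel, event times, and hyperparameters are invented):

```python
import numpy as np

# Generative model: an observed trace y is a stereotyped kernel h convolved
# with a sparse event train x. Each loop pass below is one "layer" of an
# unrolled ISTA network, so the weights of every layer are, by construction,
# the kernel itself: the interpretability argument behind unrolling.
T = 200
h = np.exp(-np.arange(10) / 3.0)            # hypothetical response kernel
x_true = np.zeros(T)
x_true[[30, 90, 150]] = [1.0, 1.5, 0.8]     # three event onsets
A = np.zeros((T, T))                        # convolution as an explicit matrix
for j in range(T):
    seg = h[: T - j]
    A[j : j + len(seg), j] = seg
y = A @ x_true                              # observed trace (noiseless)

def unrolled_deconv(y, A, n_layers=400, lam=0.02):
    L = np.linalg.norm(A, 2) ** 2           # step-size bound
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):               # each pass = one network layer
        x = np.maximum(x - A.T @ (A @ x - y) / L - lam / L, 0.0)
    return x

x_hat = unrolled_deconv(y, A)               # recovers event times and amplitudes
```

Because the unrolled network's weights are exactly the kernel h, inspecting the trained weights directly yields the stereotyped single-neuron response, which is the kind of interpretable link to the generative model the paper argues for.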
Combining Sampling Methods with Attractor Dynamics in Spiking Models of Head-Direction Systems
Vojko Pjanovic
Jacob Zavatone-Veth
Sander Keemink
Michele Nardin
Uncertainty is a fundamental aspect of the natural environment, requiring the brain to infer and integrate noisy signals to guide behavior effectively. Sampling-based inference has been proposed as a mechanism for dealing with uncertainty, particularly in early sensory processing. However, it is unclear how to reconcile sampling-based methods with operational principles of higher-order brain areas, such as attractor dynamics of persistent neural representations. In this study, we present a spiking neural network model for the head-direction (HD) system that combines sampling-based inference with attractor dynamics. To achieve this, we derive the required spiking neural network dynamics and interactions to perform sampling from a large family of probability distributions—including variables encoded with Poisson noise. We then propose a method that allows the network to update its estimate of the current head direction by integrating angular velocity samples—derived from noisy inputs—with a pull towards a circular manifold, thereby maintaining consistent attractor dynamics. This model makes specific, testable predictions about the HD system that can be examined in future neurophysiological experiments: it predicts correlated subthreshold voltage fluctuations; distinctive short- and long-term firing correlations among neurons; and characteristic statistics of the movement of the neural activity “bump” representing the head direction. Overall, our approach extends previous theories on probabilistic sampling with spiking neurons, offers a novel perspective on the computations responsible for orientation and navigation, and supports the hypothesis that sampling-based methods can be combined with attractor dynamics to provide a viable framework for studying neural dynamics across the brain.
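The combination described, noisy samples integrated under a pull toward a circular manifold, can be sketched at the rate level in a few lines. Here each "sample" is a head-direction hypothesis on the circle; noisy angular-velocity observations drive the samples apart, while a soft pull toward their circular mean plays the role of the attractor. This is a toy of the idea, not the paper's spiking network, and all constants are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# A population of head-direction hypotheses ("samples") on the circle.
# Each step: integrate a noisy angular-velocity observation (sampling-based
# inference), then pull softly toward the circular mean (attractor dynamics)
# so the population stays a coherent bump instead of diffusing.
n_samples, n_steps, dt = 200, 500, 0.01
true_hd = 0.0
omega = 1.0                      # true angular velocity (rad/s)
samples = np.zeros(n_samples)    # all hypotheses start at heading 0

def circ_mean(a):
    """Circular mean of angles in radians."""
    return np.angle(np.mean(np.exp(1j * a)))

for _ in range(n_steps):
    true_hd = (true_hd + omega * dt) % (2 * np.pi)
    obs = omega + rng.normal(0.0, 1.0, n_samples)   # noisy velocity samples
    samples = samples + obs * dt                    # integrate each hypothesis
    mu = circ_mean(samples)
    samples = samples + 0.1 * np.angle(np.exp(1j * (mu - samples)))  # pull
    samples = samples % (2 * np.pi)

estimate = circ_mean(samples)   # decoded heading tracks the true heading
```

The spread of the samples carries the network's uncertainty about heading, while the pull keeps the representation on the ring: the same division of labor the paper implements with spiking dynamics.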
Perception and neural representation of intermittent odor stimuli in mice
Luis Boero
Hao Wu
Joseph D. Zak
Farhad Pashakhanloo
Siddharth Jayakumar
Bahareh Tolooshams
Demba Ba
Venkatesh N. Murthy
Bounded optimality of time investments in rats, mice, and humans
Torben Ott
Marion Bosc
Joshua I. Sanders
Adam Kepecs
Self Supervised Dictionary Learning Using Kernel Matching
Shubham Choudhary
Demba Ba
We introduce a self-supervised framework for learning representations in the context of dictionary learning. We cast the problem as a kernel matching task between the input and the representation space, with constraints on the latent kernel. By adjusting these constraints, we demonstrate how the framework can adapt to different learning objectives. We then formulate a novel Alternating Direction Method of Multipliers (ADMM) based algorithm to solve the optimization problem and connect the dynamics to classical alternating minimization techniques. This approach offers a unique way of learning representations with kernel constraints, enabling us to implicitly learn a generative map for the data from the learned representations, which can have broad applications in representation learning tasks in both machine learning and neuroscience.
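The kernel matching objective at the heart of this abstract can be sketched in its simplest, unconstrained form: learn representations Z whose linear kernel Z.T @ Z matches the input kernel X.T @ X. The paper additionally constrains the latent kernel and solves the problem with ADMM; the toy below keeps only the matching term and uses plain gradient descent, with all dimensions and step sizes invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Kernel matching: find Z such that the pairwise similarities among
# representations (Z.T @ Z) reproduce the pairwise similarities among
# inputs (X.T @ X), by gradient descent on the Frobenius mismatch.
d, k, n = 5, 5, 40           # input dim, latent dim, number of samples
X = rng.normal(size=(d, n))
K_x = X.T @ X                # target (input-space) kernel
Z = rng.normal(size=(k, n)) * 0.1  # small random initialization

lr = 1e-3
for _ in range(3000):
    diff = Z.T @ Z - K_x             # latent-kernel mismatch
    Z -= lr * 4 * Z @ diff           # gradient of ||Z.T Z - K_x||_F**2

rel_err = np.linalg.norm(Z.T @ Z - K_x) / np.linalg.norm(K_x)
```

With the latent dimension matching the input rank, the learned kernel converges to the target (Z is recovered up to an orthogonal transform, since only the kernel is constrained); adding constraints on Z.T @ Z, as the paper does, is what steers the solution toward sparse or otherwise structured codes.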