
Shahab Bakhtiari

Associate Academic Member
Assistant Professor, Université de Montréal, Department of Psychology
Research Topics
Computational Neuroscience
Computer Vision
Deep Learning
Representation Learning

Biography

Shahab Bakhtiari is an assistant professor in the Department of Psychology at Université de Montréal and an associate academic member of Mila – Quebec Artificial Intelligence Institute. Bakhtiari received his undergraduate and graduate degrees in electrical engineering from the University of Tehran. He then earned a PhD in neuroscience from McGill University and was a postdoctoral researcher at Mila, where he focused on research at the intersection of neuroscience and AI. His research examines visual perception and learning in both biological brains and artificial neural networks. He uses deep learning as a computational framework to model learning and perception in the brain, and aims to leverage our understanding of the nervous system to create more biologically inspired AI.

Current Students

Master's Research - Université de Montréal (principal supervisor)
PhD - Université de Montréal
Independent visiting researcher
PhD - Université de Montréal (principal supervisor)
Master's Research - McGill University (principal supervisor)
Master's Research - Université de Montréal
Master's Research - McGill University (principal supervisor)

Publications

Asymmetric stimulus representations bias visual perceptual learning
Pooya Laamerad
Asmara Awada
Christopher C. Pack
The primate visual cortex contains various regions that exhibit specialization for different stimulus properties, such as motion, shape, and color. Within each region there is often further specialization, such that particular stimulus features, such as horizontal and vertical orientations, are overrepresented. These asymmetries are associated with well-known perceptual biases, but little is known about how they influence visual learning. Most theories would predict that learning is optimal, in the sense that it is unaffected by these asymmetries. But other approaches to learning would result in specific patterns of perceptual biases. To distinguish between these possibilities, we trained human observers to discriminate between expanding and contracting motion patterns, which have a highly asymmetrical representation in visual cortex. Observers exhibited biased percepts of these stimuli, and these biases were affected by training in ways that were often suboptimal. We simulated different neural network models and found that a learning rule that involved only adjustments to decision criteria, rather than connection weights, could account for our data. These results suggest that cortical asymmetries influence visual perception and that human observers often rely on suboptimal strategies for learning.
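The criterion-only learning rule described in this abstract can be illustrated with a deliberately minimal toy, not the authors' model: a signal-detection observer with fixed, asymmetric evidence distributions for the two motion classes, where feedback nudges only the decision criterion and never any connection weights. All distribution parameters and the learning rate below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, asymmetric "neural" evidence distributions for the two motion classes:
# one class (expansion) is over-represented, i.e. has a stronger mean drive.
MU_EXPAND, MU_CONTRACT, SIGMA = 1.0, -0.4, 1.0

def trial(criterion, rng):
    """One discrimination trial: sample evidence, classify by criterion."""
    label = int(rng.integers(2))          # 0 = contract, 1 = expand
    mu = MU_EXPAND if label else MU_CONTRACT
    evidence = rng.normal(mu, SIGMA)
    choice = int(evidence > criterion)
    return choice == label, evidence

def train_criterion(n_trials=5000, lr=0.02, rng=rng):
    """Criterion-only learning: on error trials, nudge the criterion toward
    the observed evidence. No read-out weights are ever changed."""
    criterion = 0.0                       # naive observer starts unbiased
    for _ in range(n_trials):
        correct, evidence = trial(criterion, rng)
        if not correct:
            criterion += lr * (evidence - criterion)
    return criterion

final = train_criterion()
print(f"learned criterion: {final:.2f}")
```

In this toy, the asymmetry of the two evidence distributions can leave the learned criterion away from the optimal class boundary, a simple way in which criterion-only updates produce biased, suboptimal percepts.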
Energy efficiency as a normative account for predictive coding
The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning
Patrick J Mineault
Timothy P. Lillicrap
Christopher C. Pack
The visual system of mammals comprises parallel, hierarchical, specialized pathways. Different pathways are specialized insofar as they use representations that are more suitable for supporting specific downstream behaviours. The clearest example is the specialization of the ventral ("what") and dorsal ("where") pathways of the visual cortex. These two pathways support behaviours related to visual recognition and movement, respectively. To date, deep neural networks have mostly been used as models of the ventral, recognition pathway. However, it is unknown whether both pathways can be modelled with a single deep ANN. Here, we ask whether a single model with a single loss function can capture the properties of both the ventral and the dorsal pathways. We explore this question using data from mice, which, like other mammals, have specialized pathways that appear to support recognition and movement behaviours. We show that when we train a deep neural network architecture with two parallel pathways using a self-supervised predictive loss function, we can outperform other models in fitting mouse visual cortex. Moreover, we can model both the dorsal and ventral pathways. These results demonstrate that a self-supervised predictive learning approach applied to parallel pathway architectures can account for some of the functional specialization seen in mammalian visual systems.
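The core objective here, self-supervised next-frame prediction shared across parallel pathways, can be sketched with a deliberately minimal stand-in: a linear model, not the paper's deep network. The toy "video", pathway sizes, and learning rate are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sequence(T=200, D=16):
    """Toy 'video': a bump drifting one position per frame across D positions."""
    X = np.zeros((T, D))
    for t in range(T):
        X[t, t % D] = 1.0
    return X

def train_predictive_pathways(X, n_epochs=200, lr=0.1):
    """Two parallel linear pathways trained with a single self-supervised
    loss: predict frame t+1 from frame t. Each pathway has its own weights,
    so they are free to specialize, but the objective is shared."""
    T, D = X.shape
    W1 = rng.normal(0, 0.1, (D, D))   # pathway 1
    W2 = rng.normal(0, 0.1, (D, D))   # pathway 2
    for _ in range(n_epochs):
        pred = X[:-1] @ (W1 + W2)     # pathways combined at the read-out
        err = pred - X[1:]            # next-frame prediction error
        grad = X[:-1].T @ err / (T - 1)
        W1 -= lr * grad
        W2 -= lr * grad
    loss = float(np.mean((X[:-1] @ (W1 + W2) - X[1:]) ** 2))
    return W1, W2, loss

X = make_sequence()
_, _, loss = train_predictive_pathways(X)
print(f"final next-frame MSE: {loss:.4f}")
```

The point of the sketch is only the shape of the objective: nothing in the loss names a downstream task, yet the prediction error alone is enough to drive the weights toward the structure of the input dynamics.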
Parallel inference of hierarchical latent dynamics in two-photon calcium imaging of neuronal populations
Luke Y. Prince
Colleen J Gillon
Dynamic latent variable modelling has provided a powerful tool for understanding how populations of neurons compute. For spiking data, such latent variable modelling can treat the data as a set of point-processes, due to the fact that spiking dynamics occur on a much faster timescale than the computational dynamics being inferred. In contrast, for other experimental techniques, the slow dynamics governing the observed data are similar in timescale to the computational dynamics that researchers want to infer. An example of this is in calcium imaging data, where calcium dynamics can have timescales on the order of hundreds of milliseconds. As such, the successful application of dynamic latent variable modelling to modalities like calcium imaging data will rest on the ability to disentangle the deeper- and shallower-level dynamical systems' contributions to the data. To date, no techniques have been developed to directly achieve this. Here we solve this problem by extending recent advances using sequential variational autoencoders for dynamic latent variable modelling of neural data. Our system VaLPACa (Variational Ladders for Parallel Autoencoding of Calcium imaging data) solves the problem of disentangling deeper- and shallower-level dynamics by incorporating a ladder architecture that can infer a hierarchy of dynamical systems. Using some built-in inductive biases for calcium dynamics, we show that we can disentangle calcium flux from the underlying dynamics of neural computation. First, we demonstrate with synthetic calcium data that we can correctly disentangle an underlying Lorenz attractor from calcium dynamics. Next, we show that we can infer appropriate rotational dynamics in spiking data from macaque motor cortex after it has been converted into calcium fluorescence data via a calcium dynamics model. Finally, we show that our method applied to real calcium imaging data from primary visual cortex in mice allows us to infer latent factors that carry salient sensory information about unexpected stimuli. These results demonstrate that variational ladder autoencoders are a promising approach for inferring hierarchical dynamics in experimental settings where the measured variable has its own slow dynamics, such as calcium imaging data. Our new, open-source tool thereby provides the neuroscience community with the ability to apply dynamic latent variable modelling to a wider array of data modalities.
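The observation process being disentangled here can be sketched with a standard forward model of calcium imaging, the kind of generative process a method like VaLPACa must invert: fast spikes drive a slowly decaying calcium variable, which is read out noisily as fluorescence. This is an illustrative toy, not the paper's model, and the parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def calcium_fluorescence(spikes, dt=0.01, tau=0.3, gain=1.0,
                         noise_sd=0.05, rng=rng):
    """Forward model: each spike adds to a calcium variable that decays with
    time constant `tau` (hundreds of ms), and fluorescence is a noisy linear
    read-out of calcium. The slow decay is why the observed data mix the
    observation dynamics with the faster computation underneath."""
    c = 0.0
    F = np.empty(len(spikes), dtype=float)
    decay = np.exp(-dt / tau)
    for t, s in enumerate(spikes):
        c = decay * c + s              # exponential decay + spike influx
        F[t] = gain * c + rng.normal(0, noise_sd)
    return F

# A short spike train: fast, discrete events become slow, overlapping bumps.
spikes = np.zeros(500)
spikes[[50, 60, 300]] = 1.0
F = calcium_fluorescence(spikes)
```

Because the fluorescence trace is a lagged, smoothed version of the spikes, a latent-variable model fit directly to it must separate this observation timescale from the computational dynamics, which is the disentangling problem the ladder architecture addresses.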
Your head is there to move you around: Goal-driven models of the primate dorsal pathway
Patrick J Mineault
Christopher C. Pack
Neurons in the dorsal visual pathway of the mammalian brain are selective for motion stimuli, with the complexity of stimulus representations increasing along the hierarchy. This progression is similar to that of the ventral visual pathway, which is well characterized by artificial neural networks (ANNs) optimized for object recognition. In contrast, there are no image-computable models of the dorsal stream with comparable explanatory power. We hypothesized that the properties of dorsal stream neurons could be explained by a simple learning objective: the need for an organism to orient itself during self-motion. To test this hypothesis, we trained a 3D ResNet to predict an agent's self-motion parameters from visual stimuli in a simulated environment. We found that the responses in this network accounted well for the selectivity of neurons in a large database of single-neuron recordings from the dorsal visual stream of non-human primates. In contrast, ANNs trained on an action recognition dataset through supervised or self-supervised learning could not explain responses in the dorsal stream, despite also being trained on naturalistic videos with moving objects. These results demonstrate that an ecologically relevant cost function can account for dorsal stream properties in the primate brain.
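The training objective here, regressing self-motion parameters from visual motion input, can be illustrated with a minimal linear stand-in for the paper's 3D ResNet. The flow model, image sizes, and noise level below are all hypothetical; only the shape of the objective (motion input in, self-motion parameters out) comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_field(vx, vy, H=8, W=8, noise_sd=0.1, rng=rng):
    """Toy optic flow for pure translation: every pixel moves by (vx, vy),
    plus sensor noise. Real self-motion also adds rotation and depth terms."""
    f = np.stack([np.full((H, W), vx), np.full((H, W), vy)])
    return (f + rng.normal(0, noise_sd, f.shape)).ravel()

# Dataset: flow fields labelled with the self-motion that produced them.
V = rng.uniform(-1, 1, (200, 2))
X = np.stack([flow_field(vx, vy) for vx, vy in V])

# Least-squares read-out of self-motion parameters from the flow input:
# the same supervision signal as the paper's network, with a linear model.
W_hat, *_ = np.linalg.lstsq(X, V, rcond=None)
pred = X @ W_hat
err = float(np.mean(np.abs(pred - V)))
print(f"mean abs self-motion error: {err:.3f}")
```

Even this linear read-out recovers the translation parameters by pooling noisy flow vectors across the visual field, which conveys why self-motion estimation is a plausible supervision signal for motion-selective representations.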