Shahab Bakhtiari

Associate academic member
Assistant Professor, Université de Montréal, Department of Psychology
Research topics
Representation learning
Deep learning
Computational neuroscience
Computer vision

Biography

Shahab Bakhtiari is an assistant professor in the Department of Psychology at the Université de Montréal and an associate academic member of Mila – Quebec Artificial Intelligence Institute. He earned his undergraduate and graduate degrees in electrical engineering at the University of Tehran. He then completed a PhD in neuroscience at McGill University and was a postdoctoral researcher at Mila, where he focused on research at the intersection of neuroscience and artificial intelligence. His work examines visual perception and learning in biological brains and artificial neural networks. He uses deep learning as a computational framework for modeling learning and perception in the brain, and draws on our understanding of the nervous system to build more biologically inspired artificial intelligence.

Current students

PhD - UdeM
Principal supervisor:
Research Intern - McGill University
PhD - UdeM
Principal supervisor:
Bachelor's - UdeM
Research Master's - UdeM
Postdoctorate - UdeM
PhD - McGill
Principal supervisor:

Publications

Shaped by meaning, weighted by reliability: New insights into multisensory integration
Elizaveta Sycheva
Léa St-Gelais
Karim Jerbi
Franco Lepore
Vanessa Hadid
Context-Aware World Models for Task-Agnostic Control
Busra Tugce Gurbuz
Christopher C. Pack
Eilif Benjamin Muller
Self-Supervised Learning from Structural Invariance
Why all roads don't lead to Rome: Representation geometry varies across the human visual cortical hierarchy
Zahraa Chorghay
Blake Aaron Richards
The curriculum effect in visual learning: the role of readout dimensionality
Christopher C. Pack
seq-JEPA: Autoregressive Predictive Learning of Invariant-Equivariant World Models
Current self-supervised algorithms commonly rely on transformations such as data augmentation and masking to learn visual representations. This is achieved by enforcing invariance or equivariance with respect to these transformations after encoding two views of an image. This dominant two-view paradigm often limits the flexibility of learned representations for downstream adaptation by creating performance trade-offs between high-level invariance-demanding tasks such as image classification and more fine-grained equivariance-related tasks. In this work, we propose seq-JEPA, a world modeling framework that introduces architectural inductive biases into joint-embedding predictive architectures to resolve this trade-off. Without relying on dual equivariance predictors or loss terms, seq-JEPA simultaneously learns two architecturally segregated representations: one equivariant to specified transformations and another invariant to them. To do so, our model processes short sequences of different views (observations) of inputs. Each encoded view is concatenated with an embedding of the relative transformation (action) that produces the next observation in the sequence. These view-action pairs are passed through a transformer encoder that outputs an aggregate representation. A predictor head then conditions this aggregate representation on the upcoming action to predict the representation of the next observation. Empirically, seq-JEPA demonstrates strong performance on both equivariant and invariant benchmarks without sacrificing one for the other. Furthermore, it excels at tasks that inherently require aggregating a sequence of observations, such as path integration across actions and predictive learning across eye movements.
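The view-action aggregation described in the abstract can be sketched in a few lines. This is a minimal toy illustration, not the authors' implementation: all dimensions are made up, mean pooling stands in for the transformer encoder, and the random linear encoder and predictor stand in for trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative, not from the paper):
# D_VIEW: raw view size, D_REP: representation size, D_ACT: action embedding, T: sequence length
D_VIEW, D_REP, D_ACT, T = 16, 8, 4, 3

# Random weights stand in for the trained encoder and predictor head.
W_enc = rng.normal(size=(D_VIEW, D_REP))
W_pred = rng.normal(size=(D_REP + 2 * D_ACT, D_REP))

def encode(views):
    """Per-view encoder: each observation gets its own (equivariant) representation."""
    return np.tanh(views @ W_enc)

def aggregate(view_reps, action_embs):
    """Concatenate each encoded view with the embedding of the action that
    produces the next observation, then pool the pairs. Mean pooling here
    stands in for the transformer encoder that outputs the aggregate."""
    pairs = np.concatenate([view_reps, action_embs], axis=-1)
    return pairs.mean(axis=0)

def predict_next(agg, next_action):
    """Predictor head: condition the aggregate representation on the
    upcoming action to predict the next observation's representation."""
    return np.tanh(np.concatenate([agg, next_action]) @ W_pred)

views = rng.normal(size=(T, D_VIEW))    # T observations (views) of one input
actions = rng.normal(size=(T, D_ACT))   # action t maps observation t to t+1
agg = aggregate(encode(views), actions)
pred = predict_next(agg, rng.normal(size=D_ACT))
target = encode(rng.normal(size=(1, D_VIEW)))[0]  # next observation's representation
loss = np.mean((pred - target) ** 2)    # training would minimize this prediction error
```

In the full model, `agg` is the invariance-leaning representation (it summarizes many views) while the per-view encoder outputs remain equivariant to the transformations, which is how the two pathways are architecturally segregated.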
Seeing the world as animals do: How to leverage generative AI for ecological neuroscience
Exploiting large-scale neuroimaging datasets to reveal novel insights in vision science
Peter Brotherwood
Catherine Landry
Jasper van den Bosch
Tim Kietzmann
Frédéric Gosselin
Adrien Doerig
Neural responses in space and time to a massive set of natural scenes
Peter Brotherwood
Emmanuel Lebeau
Mathias Salvas-Hébert
Marin Coignard
Frédéric Gosselin
Kendrick Kay
Asymmetric stimulus representations bias visual perceptual learning
Pooya Laamerad
Asmara Awada
Christopher C. Pack
The primate visual cortex contains various regions that exhibit specialization for different stimulus properties, such as motion, shape, and color. Within each region, there is often further specialization, such that particular stimulus features, such as horizontal and vertical orientations, are over-represented. These asymmetries are associated with well-known perceptual biases, but little is known about how they influence visual learning. Most theories would predict that learning is optimal, in the sense that it is unaffected by these asymmetries. However, other approaches to learning would result in specific patterns of perceptual biases. To distinguish between these possibilities, we trained human observers to discriminate between expanding and contracting motion patterns, which have a highly asymmetrical representation in the visual cortex. Observers exhibited biased percepts of these stimuli, and these biases were affected by training in ways that were often suboptimal. We simulated different neural network models and found that a learning rule that involved only adjustments to decision criteria, rather than connection weights, could account for our data. These results suggest that cortical asymmetries influence visual perception and that human observers often rely on suboptimal strategies for learning.
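The criterion-only learning rule contrasted above with weight learning can be illustrated in a toy signal-detection setting. Everything here is hypothetical (the asymmetric means, noise level, and error-driven update rule are chosen for illustration); the point is simply that such a model learns by shifting a decision criterion while the sensory encoding stays fixed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed, asymmetric sensory encoding: the two stimulus classes map to a
# scalar decision variable with means that are not symmetric about zero.
MU_EXPAND, MU_CONTRACT, SIGMA = 1.0, -0.5, 1.0

def decision_variable(label, n):
    """Sample the internal response for n trials of one stimulus class."""
    mu = MU_EXPAND if label == 1 else MU_CONTRACT
    return rng.normal(mu, SIGMA, size=n)

# Criterion-only learning: the encoding (means) never changes; after each
# error, the criterion shifts a small step away from the side that erred.
criterion = 0.0
LR = 0.05
for _ in range(2000):
    label = int(rng.integers(0, 2))
    x = decision_variable(label, 1)[0]
    choice = 1 if x > criterion else 0
    if choice != label:
        criterion += LR if label == 0 else -LR

# Evaluate the learned criterion on fresh trials of each class.
xs1 = decision_variable(1, 5000)
xs0 = decision_variable(0, 5000)
acc = ((xs1 > criterion).mean() + (xs0 <= criterion).mean()) / 2
```

With equal-variance Gaussian responses, this error-balancing rule settles near the midpoint of the two means; because only the criterion moves, the model can reproduce persistent biases that weight learning would gradually wash out.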
Spatial Distribution Modeling of Pistacia atlantica using Artificial Neural Network in Khohir National Park
Tymour Rostani Shahraji
Reza Akhavan
Reza Ebrahimi Atani
Energy efficiency as a normative account for predictive coding