
Pouya Bashivan

Associate Academic Member
Assistant Professor, McGill University, Department of Physiology
Research Topics
Computational Neuroscience

Biography

Pouya Bashivan is an assistant professor in the Department of Physiology and a member of the Integrated Program in Neuroscience at McGill University, as well as an associate member of Mila – Quebec Artificial Intelligence Institute. Before joining McGill, he was a postdoctoral researcher at Mila, working with Irina Rish and Blake Richards. Prior to that, he was a postdoctoral researcher in the Department of Brain and Cognitive Sciences and at the McGovern Institute for Brain Research at the Massachusetts Institute of Technology (MIT), where he worked with Professor James DiCarlo. He received his PhD in computer engineering from the University of Memphis in 2016, after earning bachelor's and master's degrees in electrical and control engineering from KNT University (Tehran, Iran).

The goal of the research in his lab is to develop neural network models that leverage memory to solve complex tasks. While we often rely on task-performance measures to find improved neural network models and learning algorithms, we also use neural and behavioral measurements from the brains of humans and other animals to evaluate how similar these models are to biologically evolved brains. We believe these additional constraints could accelerate progress toward engineering a human-level artificially intelligent agent.

Current Students

Research Master's - McGill
Research Master's - McGill
PhD - McGill
Research Master's - McGill
PhD - McGill
PhD - McGill
Co-supervisor:
PhD - McGill

Publications

Stable Deep Reinforcement Learning via Isotropic Gaussian Representations
Deep reinforcement learning systems often suffer from unstable training dynamics due to non-stationarity, where learning objectives and data distributions evolve over time. We show that under non-stationary targets, isotropic Gaussian embeddings are provably advantageous. In particular, they induce stable tracking of time-varying targets for linear readouts, achieve maximal entropy under a fixed variance budget, and encourage a balanced use of all representational dimensions--all of which enable agents to be more adaptive and stable. Building on this insight, we propose the use of Sketched Isotropic Gaussian Regularization for shaping representations toward an isotropic Gaussian distribution during training. We demonstrate empirically, over a variety of domains, that this simple and computationally inexpensive method improves performance under non-stationarity while reducing representation collapse, neuron dormancy, and training instability.
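The paper's exact regularizer is not reproduced here; as a loose illustration, shaping a batch of embeddings toward an isotropic Gaussian can be sketched as a penalty on the deviation of the empirical mean from zero and of the empirical covariance from a scaled identity. This sketch omits the "sketched" (random-projection) component the title refers to; the function name and the squared-Frobenius form are assumptions, not the authors' formulation.

```python
import numpy as np

def isotropic_gaussian_penalty(z, sigma2=1.0):
    """Hypothetical penalty pushing a batch of embeddings z (n, d)
    toward N(0, sigma2 * I): the batch mean should be zero and the
    empirical covariance should equal sigma2 * I."""
    mu = z.mean(axis=0)                          # (d,) batch mean
    zc = z - mu                                  # centered embeddings
    cov = zc.T @ zc / max(len(z) - 1, 1)         # (d, d) empirical covariance
    d = z.shape[1]
    cov_term = np.sum((cov - sigma2 * np.eye(d)) ** 2)   # squared Frobenius norm
    mean_term = np.sum(mu ** 2)
    return cov_term + mean_term
```

A batch drawn from a standard normal incurs a small penalty, while a collapsed representation (all embeddings identical) is penalized heavily, matching the abstract's point about representation collapse.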
SPECTRE: Spectral Pre-training Embeddings with Cylindrical Temporal Rotary Position Encoding for Fine-Grained sEMG-Based Movement Decoding
Zihan Weng
Chanlin Yi
Jing Lu
Fali Li
Dezhong Yao
Jingming Hou
Yangsong Zhang
Peng Xu
Decoding fine-grained movement from non-invasive surface Electromyography (sEMG) is a challenge for prosthetic control due to signal non-stationarity and low signal-to-noise ratios. Generic self-supervised learning (SSL) frameworks often yield suboptimal results on sEMG as they attempt to reconstruct noisy raw signals and lack the inductive bias to model the cylindrical topology of electrode arrays. To overcome these limitations, we introduce SPECTRE, a domain-specific SSL framework. SPECTRE features two primary contributions: a physiologically-grounded pre-training task and a novel positional encoding. The pre-training involves masked prediction of discrete pseudo-labels from clustered Short-Time Fourier Transform (STFT) representations, compelling the model to learn robust, physiologically relevant frequency patterns. Additionally, our Cylindrical Rotary Position Embedding (CyRoPE) factorizes embeddings along linear temporal and annular spatial dimensions, explicitly modeling the forearm sensor topology to capture muscle synergies. Evaluations on multiple datasets, including challenging data from individuals with amputation, demonstrate that SPECTRE establishes a new state-of-the-art for movement decoding, significantly outperforming both supervised baselines and generic SSL approaches. Ablation studies validate the critical roles of both spectral pre-training and CyRoPE. SPECTRE provides a robust foundation for practical myoelectric interfaces capable of handling real-world sEMG complexities.
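The factorization CyRoPE describes can be illustrated with a minimal rotary-embedding sketch: half the feature dimensions receive a standard rotary rotation driven by linear time, the other half an angular rotation driven by the electrode's position on the forearm ring. Integer frequencies on the angular part make the encoding wrap around, so the first and "last" electrode on the ring coincide. All function names and the exact half-and-half split are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def rope_rotate(x, pos, base=10000.0):
    """Standard RoPE: rotate feature pairs (x1[i], x2[i]) by pos * base**(-2i/d)."""
    d = x.shape[-1]
    half = d // 2
    freqs = base ** (-2.0 * np.arange(half) / d)
    theta = np.asarray(pos)[..., None] * freqs
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * np.cos(theta) - x2 * np.sin(theta),
                           x1 * np.sin(theta) + x2 * np.cos(theta)], axis=-1)

def angular_rotate(x, angle):
    """Rotary rotation with integer frequencies 1, 2, ... so a full turn
    (angle + 2*pi) yields the same encoding -- the cylindrical part."""
    half = x.shape[-1] // 2
    theta = angle * np.arange(1, half + 1)
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * np.cos(theta) - x2 * np.sin(theta),
                           x1 * np.sin(theta) + x2 * np.cos(theta)], axis=-1)

def cyrope(x, t, electrode, n_electrodes):
    """Hypothetical CyRoPE sketch: first half of the features carries a
    linear temporal rotation, second half an angular rotation given by
    the electrode's position on the ring, so the spatial code wraps."""
    half = x.shape[-1] // 2
    angle = 2 * np.pi * electrode / n_electrodes
    return np.concatenate([rope_rotate(x[..., :half], t),
                           angular_rotate(x[..., half:], angle)], axis=-1)
```

Because every step is a plane rotation, the encoding preserves the norm of the input features, and electrode indices 0 and n_electrodes map to identical embeddings.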
Caption This, Reason That: VLMs Caught in the Middle
Zihan Weng
Taylor Whittington Webb
Vision-Language Models (VLMs) have shown remarkable progress in visual understanding in recent years. Yet, they still lag behind human capabilities in specific visual tasks such as counting or relational reasoning. To understand the underlying limitations, we adopt methodologies from cognitive science, analyzing VLM performance along core cognitive axes: Perception, Attention, and Memory. Using a suite of tasks targeting these abilities, we evaluate state-of-the-art VLMs, including GPT-4o. Our analysis reveals distinct cognitive profiles: while advanced models approach ceiling performance on some tasks (e.g. category identification), a significant gap persists, particularly in tasks requiring spatial understanding or selective attention. Investigating the source of these failures and potential methods for improvement, we employ a vision-text decoupling analysis, finding that models struggling with direct visual reasoning show marked improvement when reasoning over their own generated text captions. These experiments reveal a strong need for improved VLM Chain-of-Thought (CoT) abilities, even in models that consistently exceed human performance. Furthermore, we demonstrate the potential of targeted fine-tuning on composite visual reasoning tasks and show that fine-tuning smaller VLMs substantially improves core cognitive abilities. While this improvement does not translate to large enhancements on challenging, out-of-distribution benchmarks, we show broadly that VLM performance on our datasets strongly correlates with performance on these other benchmarks. Our work provides a detailed analysis of VLM cognitive strengths and weaknesses and identifies key bottlenecks in simultaneous perception and reasoning while also providing an effective and simple solution.
A Geometric Lens on RL Environment Complexity Based on Ricci Curvature
We introduce Ollivier-Ricci Curvature (ORC) as an information-geometric tool for analyzing the local structure of reinforcement learning (RL) environments. We establish a novel connection between ORC and the Successor Representation (SR), enabling a geometric interpretation of environment dynamics decoupled from reward signals. Our analysis shows that states with positive and negative ORC values correspond to regions where random walks converge and diverge respectively, which are often critical for effective exploration. ORC is highly correlated with established environment complexity metrics, yet integrates naturally with standard RL frameworks based on SR and provides both global and local complexity measures. Leveraging this property, we propose an ORC-based intrinsic reward that guides agents toward divergent regions and away from convergent traps. Empirical results demonstrate that our curvature-driven reward substantially improves exploration performance across diverse environments, outperforming both random and count-based intrinsic baselines.
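The paper derives ORC through the Successor Representation; as a loose illustration using the standard graph-theoretic definition instead, the curvature of an edge (x, y) is 1 minus the 1-Wasserstein distance between the uniform neighbor distributions of x and y, divided by d(x, y). The sketch below computes this exactly with a small transport linear program; the uniform (non-lazy) random-walk measure and all function names are assumptions, not the authors' construction.

```python
import numpy as np
from collections import deque
from scipy.optimize import linprog

def bfs_dist(adj, src):
    """Hop distances from src on an unweighted graph (adjacency lists)."""
    dist = [np.inf] * len(adj)
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] == np.inf:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def w1(mu, nu, cost):
    """Exact 1-Wasserstein distance between two discrete distributions,
    solved as a small optimal-transport LP over their supports."""
    xs, ys = np.nonzero(mu)[0], np.nonzero(nu)[0]
    C = cost[np.ix_(xs, ys)]
    nX, nY = len(xs), len(ys)
    A_eq, b_eq = [], []
    for i in range(nX):                       # row marginals must equal mu
        row = np.zeros(nX * nY)
        row[i * nY:(i + 1) * nY] = 1
        A_eq.append(row); b_eq.append(mu[xs[i]])
    for j in range(nY):                       # column marginals must equal nu
        row = np.zeros(nX * nY)
        row[j::nY] = 1
        A_eq.append(row); b_eq.append(nu[ys[j]])
    res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None))
    return res.fun

def orc(adj, x, y):
    """Ollivier-Ricci curvature of edge (x, y): 1 - W1(m_x, m_y) / d(x, y),
    with m_v uniform over the neighbors of v (no laziness)."""
    n = len(adj)
    mu, nu = np.zeros(n), np.zeros(n)
    mu[adj[x]] = 1.0 / len(adj[x])
    nu[adj[y]] = 1.0 / len(adj[y])
    dist = np.array([bfs_dist(adj, s) for s in range(n)], dtype=float)
    return 1.0 - w1(mu, nu, dist) / dist[x][y]
```

On a triangle (tightly clustered, "convergent" region) an edge has positive curvature, while on a long cycle ("flat" corridor) edge curvature is zero, which matches the abstract's reading of positive ORC as random-walk convergence.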
Spatially and non-spatially tuned hippocampal neurons are linear perceptual and nonlinear memory encoders
Kaicheng Yan
Benjamin Corrigan
Roberto Gulli
Julio Martinez-Trujillo
The hippocampus has long been regarded as a neural map of physical space, with its neurons categorized as spatially or non-spatially tuned according to their response selectivity. However, growing evidence suggests that this dichotomy oversimplifies the complex roles hippocampal neurons play in integrating spatial and non-spatial information. Through computational modeling and in-vivo electrophysiology in macaques, we show that neurons classified as spatially tuned primarily encode linear combinations of immediate behaviorally relevant factors, while those labeled as non-spatially tuned rely on nonlinear mechanisms to integrate temporally distant experiences. Furthermore, our findings reveal a temporal gradient in the primate CA3 region, where spatial selectivity diminishes as neurons encode increasingly distant past events. Finally, using artificial neural networks, we demonstrate that nonlinear recurrent connections are crucial for capturing the response dynamics of non-spatially tuned neurons, particularly those encoding memory-related information. These findings challenge the traditional dichotomy of spatial versus non-spatial representations and instead suggest a continuum of linear and nonlinear computations that underpin hippocampal function. This framework provides new insights into how the hippocampus bridges perception and memory, informing on its role in episodic memory, spatial navigation, and associative learning.
Neural signatures of associational cortex emerge in a goal-directed model of visual search
Building spatial world models from sparse transitional episodic memories
Many animals possess a remarkable capacity to rapidly construct flexible mental models of their environments. These world models are crucial for ethologically relevant behaviors such as navigation, exploration, and planning. The ability to form episodic memories and make inferences based on these sparse experiences is believed to underpin the efficiency and adaptability of these models in the brain. Here, we ask: Can a neural network learn to construct a spatial model of its surroundings from sparse and disjoint episodic memories? We formulate the problem in a simulated world and propose a novel framework, the Episodic Spatial World Model (ESWM), as a potential answer. We show that ESWM is highly sample-efficient, requiring minimal observations to construct a robust representation of the environment. It is also inherently adaptive, allowing for rapid updates when the environment changes. In addition, we demonstrate that ESWM readily enables near-optimal strategies for exploring novel environments and navigating between arbitrary points, all without the need for additional training.
Burst firing optimizes invariant coding of natural communication signals by electrosensory neural populations
Michael G. Metzen
Amin Akhshi
Anmar Khadra
Maurice J. Chacron
Accurate perception of objects within the environment independent of context is essential for the survival of an organism. While neurons that respond in an invariant manner to different stimulus waveforms resulting from identity-preserving transformations of objects are thought to provide a neural correlate of context-independent perception, how such responses emerge in the brain remains poorly understood. Here, we demonstrate that burst firing in neural populations can give rise to an invariant representation of highly heterogeneous natural communication stimuli. Multi-unit recordings from central sensory neural populations showed that considering burst spike trains led to invariant representations at the population but not the single neuron level. Computational modeling further revealed that optimal invariance is achieved at burst firing levels seen experimentally. Taken together, our results demonstrate an important function for burst firing toward establishing invariant representations of sensory input in neural populations.
Learning adversarially robust kernel ensembles with kernel average pooling.
Amirozhan Dehghani
Yifei Ren
Credit-Based Self Organizing Maps: Training Deep Topographic Networks with Minimal Performance Degradation
Amir Ozhan Dehghani
Xinyu Qian
Asa Farahani
Real-time fine finger motion decoding for transradial amputees with surface electromyography
Zihan Weng
Yang Xiao
Peiyang Li
Chanlin Yi
Hailin Ma
Guang Yao
Yuan Lin
Fali Li
Dezhong Yao
Jingming Hou
Yangsong Zhang
Peng Xu
Geometry of naturalistic object representations in recurrent neural network models of working memory