
Pouya Bashivan

Associate Academic Member
Assistant Professor, McGill University, Department of Physiology

Biography

Pouya Bashivan is an assistant professor in the Department of Physiology at McGill University, a member of McGill’s Integrated Program in Neuroscience, and an associate academic member of Mila – Quebec Artificial Intelligence Institute.

Before joining McGill University, Bashivan was a postdoctoral fellow at Mila, where he worked with Irina Rish and Blake Richards. Prior to that, he was a postdoctoral researcher in the Department of Brain and Cognitive Sciences and at the McGovern Institute for Brain Research at MIT, where he worked with James DiCarlo.

He received his PhD in computer engineering from the University of Memphis in 2016, and his BSc and MSc degrees in electrical and control engineering from K.N. Toosi University of Technology (Tehran).

The goal of research in Bashivan’s lab is to develop neural network models that leverage memory to solve complex tasks. While the lab often relies on task-performance measures to identify improved neural network models and learning algorithms, it also uses neural and behavioral measurements from human and animal brains to evaluate how closely these models resemble biologically evolved brains. The lab believes these additional constraints could expedite progress toward engineering a human-level artificially intelligent agent.

Current Students

Reza Bayat
Master's Research - Université de Montréal
reza.bayat@mila.quebec
Maxime Daigle
Master's Research - McGill University
daiglema@mila.quebec
Lucas Gomez
Master's Research - McGill University
lucas.gomez@mila.quebec
Xiaoxuan Lei
PhD - McGill University
xiaoxuan@mila.quebec
Motahareh Pourrahimi
PhD - McGill University
motahareh.pourrahimi@mila.quebec
Ali Saheb Pasand
PhD - McGill University
ali.sahebpasand@mila.quebec

Publications

The feature landscape of visual cortex
Rudi Tong
Ronan da Silva
Dongyan Lin
Arna Ghosh
James Wilsenach
Erica Cianfarano
Stuart Trenholm
Understanding computations in the visual system requires a characterization of the distinct feature preferences of neurons in different visual cortical areas. However, we know little about how feature preferences of neurons within a given area relate to that area’s role within the global organization of visual cortex. To address this, we recorded from thousands of neurons across six visual cortical areas in mouse and leveraged generative AI methods combined with closed-loop neuronal recordings to identify each neuron’s visual feature preference. First, we discovered that the mouse’s visual system is globally organized to encode features in a manner invariant to the types of image transformations induced by self-motion. Second, we found differences in the visual feature preferences of each area and that these differences generalized across animals. Finally, we observed that a given area’s collection of preferred stimuli (‘own-stimuli’) drive neurons from the same area more effectively through their dynamic range compared to preferred stimuli from other areas (‘other-stimuli’). As a result, feature preferences of neurons within an area are organized to maximally encode differences among own-stimuli while remaining insensitive to differences among other-stimuli. These results reveal how visual areas work together to efficiently encode information about the external world.
Using modular connectome-based predictive modeling to reveal brain-behavior relationships of individual differences in working memory
Huayi Yang
Junjun Zhang
Zhenlan Jin
Ling Li
Towards Out-of-Distribution Adversarial Robustness
Adam Ibrahim
Charles Guille-Escuret
Adversarial robustness continues to be a major challenge for deep learning. A core issue is that robustness to one type of attack often fails to transfer to other attacks. While prior work establishes a theoretical trade-off in robustness against different…
How well do models of visual cortex generalize to out of distribution samples?
Yifei Ren
Learning Robust Kernel Ensembles with Kernel Average Pooling
Adam Ibrahim
Amirozhan Dehghani
Yifei Ren