
Ian Charest

Associate Academic Member
Assistant Professor, Université de Montréal, Department of Psychology


Ian Charest is a cognitive computational neuroscientist whose research focuses on high-level vision and audition.

He leads the Charest Lab at the Université de Montréal, where he and his team investigate visual recognition in the brain using neuroimaging techniques, such as magneto-electroencephalography (M-EEG) and functional magnetic resonance imaging (fMRI).

Charest’s work makes use of advanced computational modelling and analysis techniques, including machine learning, representational similarity analysis (RSA) and artificial neural networks (ANNs), to better understand human brain function.
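To illustrate the core idea behind representational similarity analysis (RSA) mentioned above: brain and model representations are each summarized as a representational dissimilarity matrix (RDM) over stimulus conditions, and the two RDMs are then compared at the second-order level, typically with a rank correlation. This is a minimal, generic sketch with simulated data, not the lab's own pipeline:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """Representational dissimilarity matrix: pairwise correlation
    distance between condition patterns.
    responses: (n_conditions, n_features) array; returns the
    condensed upper-triangle vector."""
    return pdist(responses, metric="correlation")

def rsa_score(rdm_a, rdm_b):
    """Second-order comparison of two RDMs via Spearman rank
    correlation, a standard choice in RSA."""
    rho, _ = spearmanr(rdm_a, rdm_b)
    return rho

# Toy example: brain-like patterns for 10 stimuli, and a noisy
# "model" that partially shares their representational geometry.
rng = np.random.default_rng(0)
brain = rng.standard_normal((10, 50))
model = brain + 0.5 * rng.standard_normal((10, 50))
print(f"brain-model RSA: {rsa_score(rdm(brain), rdm(model)):.2f}")
```

The same comparison applies whether the second RDM comes from an artificial neural network layer, another brain region, or behavioural similarity judgments.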

Current research topics in the lab include how the brain processes information during perception, memory, and visual consciousness while recognizing and interpreting natural scenes and visual objects.

The Charest lab is currently funded by a Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant to study the interaction between vision and semantics. Charest also holds a Courtois Chair in cognitive and computational neuroscience, which supports the development of an online platform for the cross-disciplinary investigation of behavioural, computational and neuroimaging datasets.


Decoding face recognition abilities in the human brain
Simon Faghel-Soubeyrand
Meike Ramon
Eva Bamps
Matteo Zoia
Jessica Woodhams
Anne-Raphaelle Richoz
Roberto Caldara
Frédéric Gosselin
Why are some individuals better at recognising faces? Uncovering the neural mechanisms supporting face recognition ability has proven elusive. To tackle this challenge, we used a multi-modal data-driven approach combining neuroimaging, computational modelling, and behavioural tests. We recorded the high-density electroencephalographic brain activity of individuals with extraordinary face recognition abilities—super-recognisers—and typical recognisers in response to diverse visual stimuli. Using multivariate pattern analyses, we decoded face recognition abilities from 1 second of brain activity with up to 80% accuracy. To better understand the mechanisms subtending this decoding, we compared computations in the brains of our participants with those in artificial neural network models of vision and semantics, as well as with those involved in human judgments of shape and meaning similarity. Compared to typical recognisers, we found stronger associations between early brain computations of super-recognisers and mid-level computations of vision models as well as shape similarity judgments. Moreover, we found stronger associations between late brain representations of super-recognisers and computations of the artificial semantic model as well as meaning similarity judgments. Overall, these results indicate that important individual variations in brain processing, including neural computations extending beyond purely visual processes, support differences in face recognition abilities. They provide the first empirical evidence for an association between semantic computations and face recognition abilities. We believe that such multi-modal data-driven approaches will likely play a critical role in further revealing the complex nature of idiosyncratic face recognition in the human brain.
Significance: The ability to robustly recognise faces is crucial to our success as social beings. Yet, we still know little about the brain mechanisms allowing some individuals to excel at face recognition. This study builds on a sizeable neural dataset measuring the brain activity of individuals with extraordinary face recognition abilities—super-recognisers—to tackle this challenge. Using state-of-the-art computational methods, we show robust prediction of face recognition abilities in single individuals from a mere second of brain activity, and reveal specific brain computations supporting individual differences in face recognition ability. Doing so, we provide direct empirical evidence for an association between semantic computations and face recognition abilities in the human brain—a key component of prominent face recognition models.
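The time-resolved multivariate decoding described in this abstract can be sketched generically: at each time point, a classifier is trained on the spatial pattern across EEG channels and evaluated with cross-validation. The sketch below uses simulated data and scikit-learn, with an arbitrary injected group difference; it is not the authors' pipeline, and all sizes and parameters are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Simulated EEG: 40 trials x 64 channels x 100 time points,
# with a small group difference injected (hypothetical data).
X = rng.standard_normal((40, 64, 100))
y = np.repeat([0, 1], 20)          # 0 = typical, 1 = super-recogniser
X[y == 1] += 0.3

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Decode group membership at each time point from the channel pattern.
scores = [cross_val_score(clf, X[:, :, t], y, cv=5).mean()
          for t in range(X.shape[2])]
print(f"peak decoding accuracy: {max(scores):.2f}")
```

In a real analysis, the resulting accuracy time course is compared against chance level (here 0.5) with appropriate permutation statistics.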
Reconstructing Spatio-Temporal Trajectories of Visual Object Memories in the Human Brain
Julia Lifanov
Benjamin J. Griffiths
Juan Linde-Domingo
Catarina S. Ferreira
Martin Wilson
Stephen D. Mayhew
Maria Wimber
Are vividness judgments in mental imagery correlated with perceptual thresholds?
Clémence Bertrand Pilon
Hugo Delhaye
Vincent Taschereau-Dumouchel
Frédéric Gosselin
Neural representation of occluded objects in visual cortex
Courtney Mansfield
Tim Kietzmann
Jasper JF van den Bosch
Marieke Mur
Nikolaus Kriegeskorte
Fraser Smith
Reconstructing mental images using Bubbles and electroencephalography
Audrey Lamy-Proulx
Jasper JF van den Bosch
Catherine Landry
Peter Brotherwood
Vincent Taschereau-Dumouchel
Frédéric Gosselin
The semantic distance between a linguistic prime and a natural scene target predicts reaction times in a visual search experiment
Katerina Marie Simkova
Jasper JF van den Bosch
Damiano Grignolio
Clayton Hickey
Do visual mental imagery and exteroceptive perception rely on the same mechanisms?
Catherine Landry
Jasper JF van den Bosch
Frédéric Gosselin
Vincent Taschereau-Dumouchel
Improving the accuracy of single-trial fMRI response estimates using GLMsingle
Jacob S Prince
Jan W Kurzawski
John A Pyles
Michael J Tarr
Kendrick Kay
Re-expression of CA1 and entorhinal activity patterns preserves temporal context memory at long timescales
Futing Zou
Wanjia Guo
Emily J. Allen
Yihan Wu
Thomas Naselaris
Kendrick Kay
Brice A. Kuhl
J. Benjamin Hutchinson
Sarah DuBrow
Converging, cross-species evidence indicates that memory for time is supported by hippocampal area CA1 and entorhinal cortex. However, limited evidence characterizes how these regions preserve temporal memories over long timescales (e.g., months). At long timescales, memoranda may be encountered in multiple temporal contexts, potentially creating interference. Here, using 7T fMRI, we measured CA1 and entorhinal activity patterns as human participants viewed thousands of natural scene images distributed, and repeated, across many months. We show that memory for an image’s original temporal context was predicted by the degree to which CA1/entorhinal activity patterns from the first encounter with an image were re-expressed during re-encounters occurring minutes to months later. Critically, temporal memory signals were dissociable from predictors of recognition confidence, which were carried by distinct medial temporal lobe expressions. These findings suggest that CA1 and entorhinal cortex preserve temporal memories across long timescales by coding for and reinstating temporal context information.
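The "re-expression" measure in this abstract amounts to a pattern-similarity comparison: correlating the voxel pattern evoked by an image's first encounter with the pattern evoked at a later re-encounter. A minimal sketch of that measure, with simulated patterns (the function name and all data here are illustrative, not from the paper):

```python
import numpy as np

def pattern_reexpression(first, repeat):
    """Pearson correlation between the activity pattern from an
    image's first encounter and a later re-encounter -- higher
    values indicate stronger reinstatement (illustrative only)."""
    a = first - first.mean()
    b = repeat - repeat.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy example: a partially reinstated 200-voxel pattern.
rng = np.random.default_rng(2)
v1 = rng.standard_normal(200)                    # encounter 1
v2 = 0.6 * v1 + 0.8 * rng.standard_normal(200)   # later re-encounter
print(f"re-expression: {pattern_reexpression(v1, v2):.2f}")
```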
Researcher perspectives on ethics considerations in epigenetics: an international survey
Charles Dupras
Terese Knoppers
Nicole Palmour
Elisabeth Beauchamp
Stamatina Liosi
Reiner Siebert
Alison May Berner
Stephan Beck
Yann Joly
Sleep spindles track cortical learning patterns for memory consolidation
Marit Petzka
Alex Chatburn
George M. Balanos
Bernhard P. Staresina