
Ian Charest

Associate Academic Member
Assistant Professor, Université de Montréal, Department of Psychology
Research Topics
Deep Learning
Computational Neuroscience
Natural Language Processing
Computer Vision

Biography

Professor Ian Charest is a researcher in cognitive and computational neuroscience whose work focuses on high-level vision and audition. He leads the Charest Lab at Université de Montréal, where he and his team study visual recognition in the brain using neuroimaging techniques such as magneto- and electroencephalography (M-EEG) and functional magnetic resonance imaging (fMRI). His work draws on advanced modelling and analysis techniques, including machine learning, representational similarity analysis (RSA), and artificial neural networks (ANNs), to better understand how the human brain works. The lab's current research themes include how the brain processes information during perception, as well as visual memory and awareness during the recognition and interpretation of natural scenes and visual objects. The lab is currently funded by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (NSERC) to study the interplay between vision and semantics. In addition, Professor Charest holds a Courtois Chair in cognitive and computational neuroscience, which aims to develop an online platform for interdisciplinary research involving behavioural, computational, and neuroimaging data.

Publications

Reconstructing Spatio-Temporal Trajectories of Visual Object Memories in the Human Brain
Julia Lifanov
Benjamin J. Griffiths
Juan Linde-Domingo
Catarina S. Ferreira
Martin Wilson
Stephen D. Mayhew
Maria Wimber
Neural computations in prosopagnosia
Simon Faghel-Soubeyrand
Anne-Raphaelle Richoz
Delphine Waeber
Jessica Woodhams
Roberto Caldara
Frédéric Gosselin
We aimed to identify neural computations underlying the loss of face identification ability by modelling the brain activity of brain-lesioned patient PS, a well-documented case of acquired pure prosopagnosia. We collected a large dataset of high-density electrophysiological (EEG) recordings from PS and neurotypicals while they completed a one-back task on a stream of face, object, animal and scene images. We found reduced neural decoding of face identity around the N170 window in PS, and conjointly revealed normal non-face identification in this patient. We used Representational Similarity Analysis (RSA) to correlate human EEG representations with those of deep neural network (DNN) models of vision and caption-level semantics, offering a window into the neural computations at play in patient PS's deficits. Brain representational dissimilarity matrices (RDMs) were computed for each participant at 4 ms steps using cross-validated classifiers. PS's brain RDMs showed significant reliability across sessions, indicating meaningful measurements of brain representations with RSA even in the presence of significant lesions. Crucially, computational analyses were able to reveal PS's representational deficits in high-level visual and semantic brain computations. Such multi-modal data-driven characterisations of prosopagnosia highlight the complex nature of processes contributing to face recognition in the human brain.
Highlights:
We assess the neural computations in the prosopagnosic patient PS using EEG, RSA, and deep neural networks
Neural dynamics of brain-lesioned PS are reliably captured using RSA
Neural decoding shows normal evidence for non-face individuation in PS
Neural decoding shows abnormal neural evidence for face individuation in PS
PS shows impaired high-level visual and semantic neural computations
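To make the RSA step described in this abstract concrete, here is a minimal Python sketch (not the authors' code) of how a brain RDM at one time point can be compared with a model RDM: pairwise dissimilarities are computed across condition patterns, and the lower triangles of the two matrices are rank-correlated. All variable names, shapes, and the choice of correlation distance are illustrative assumptions.

# Minimal RSA sketch, assuming `eeg_patterns` holds one EEG response pattern
# per image at a single time point and `model_rdm` is a dissimilarity matrix
# derived from, e.g., a DNN's activations for the same images (all hypothetical).
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

def rdm_from_patterns(patterns):
    # Correlation-distance RDM over condition (image) patterns.
    return squareform(pdist(patterns, metric="correlation"))

def rsa_correlation(brain_rdm, model_rdm):
    # Spearman correlation between the lower triangles of the two RDMs.
    tril = np.tril_indices_from(brain_rdm, k=-1)
    rho, _ = spearmanr(brain_rdm[tril], model_rdm[tril])
    return rho

rng = np.random.default_rng(0)
eeg_patterns = rng.standard_normal((49, 128))                  # 49 images x 128 channels
model_rdm = rdm_from_patterns(rng.standard_normal((49, 512)))  # stand-in model RDM
print(rsa_correlation(rdm_from_patterns(eeg_patterns), model_rdm))

In the paper, brain RDMs are built at 4 ms steps with cross-validated classifiers rather than raw correlation distance; the sketch only shows the comparison step between a brain RDM and a model RDM.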
Decoding face recognition abilities in the human brain
Simon Faghel-Soubeyrand
Meike Ramon
Eva Bamps
Matteo Zoia
Jessica Woodhams
Anne-Raphaelle Richoz
Roberto Caldara
Frédéric Gosselin
Why are some individuals better at recognising faces? Uncovering the neural mechanisms supporting face recognition ability has proven elusive. To tackle this challenge, we used a multi-modal data-driven approach combining neuroimaging, computational modelling, and behavioural tests. We recorded the high-density electroencephalographic brain activity of individuals with extraordinary face recognition abilities—super-recognisers—and typical recognisers in response to diverse visual stimuli. Using multivariate pattern analyses, we decoded face recognition abilities from 1 second of brain activity with up to 80% accuracy. To better understand the mechanisms subtending this decoding, we compared computations in the brains of our participants with those in artificial neural network models of vision and semantics, as well as with those involved in human judgments of shape and meaning similarity. Compared to typical recognisers, we found stronger associations between early brain computations of super-recognisers and mid-level computations of vision models as well as shape similarity judgments. Moreover, we found stronger associations between late brain representations of super-recognisers and computations of the artificial semantic model as well as meaning similarity judgments. Overall, these results indicate that important individual variations in brain processing, including neural computations extending beyond purely visual processes, support differences in face recognition abilities. They provide the first empirical evidence for an association between semantic computations and face recognition abilities. We believe that such multi-modal data-driven approaches will likely play a critical role in further revealing the complex nature of idiosyncratic face recognition in the human brain.
Significance: The ability to robustly recognise faces is crucial to our success as social beings. Yet, we still know little about the brain mechanisms allowing some individuals to excel at face recognition. This study builds on a sizeable neural dataset measuring the brain activity of individuals with extraordinary face recognition abilities—super-recognisers—to tackle this challenge. Using state-of-the-art computational methods, we show robust prediction of face recognition abilities in single individuals from a mere second of brain activity, and reveal specific brain computations supporting individual differences in face recognition ability. In doing so, we provide direct empirical evidence for an association between semantic computations and face recognition abilities in the human brain—a key component of prominent face recognition models.
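As a rough illustration of the multivariate decoding described above, the sketch below cross-validates a linear classifier that predicts group membership from one feature vector per participant. It is a simplified stand-in under assumed data shapes and labels, not the study's pipeline.

# Illustrative decoding sketch with assumed shapes (hypothetical data):
# X holds one flattened 1 s EEG epoch per participant, y labels each
# participant as super-recogniser (1) or typical recogniser (0).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 128 * 50))    # 32 participants, 128 channels x 50 time samples
y = np.array([0] * 16 + [1] * 16)          # placeholder, balanced group labels

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(f"mean decoding accuracy: {scores.mean():.2f}")

With real data in place of the random arrays, the mean cross-validated accuracy is the kind of group-decoding score the abstract refers to; chance level here would be 50%.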
Are vividness judgments in mental imagery correlated with perceptual thresholds?
Clémence Bertrand Pilon
Hugo Delhaye
Vincent Taschereau-Dumouchel
Frédéric Gosselin
Neural representation of occluded objects in visual cortex
Courtney Mansfield
Tim Kietzmann
Jasper JF van den Bosch
Marieke Mur
Nikolaus Kriegeskorte
Fraser Smith
Reconstructing mental images using Bubbles and electroencephalography
Audrey Lamy-Proulx
Jasper JF van den Bosch
Catherine Landry
Peter Brotherwood
Vincent Taschereau-Dumouchel
Frédéric Gosselin
The semantic distance between a linguistic prime and a natural scene target predicts reaction times in a visual search experiment
Katerina Marie Simkova
Jasper JF van den Bosch
Damiano Grignolio
Clayton Hickey
Do visual mental imagery and exteroceptive perception rely on the same mechanisms?
Catherine Landry
Jasper JF van den Bosch
Frédéric Gosselin
Vincent Taschereau-Dumouchel
Improving the accuracy of single-trial fMRI response estimates using GLMsingle
Jacob S Prince
Jan W Kurzawski
John A Pyles
Michael J Tarr
Kendrick Kay
Re-expression of CA1 and entorhinal activity patterns preserves temporal context memory at long timescales
Futing Zou
Wanjia Guo
Emily J. Allen
Yihan Wu
Thomas Naselaris
Kendrick Kay
Brice A. Kuhl
J. Benjamin Hutchinson
Sarah DuBrow
Converging, cross-species evidence indicates that memory for time is supported by hippocampal area CA1 and entorhinal cortex. However, limited evidence characterizes how these regions preserve temporal memories over long timescales (e.g., months). At long timescales, memoranda may be encountered in multiple temporal contexts, potentially creating interference. Here, using 7T fMRI, we measured CA1 and entorhinal activity patterns as human participants viewed thousands of natural scene images distributed, and repeated, across many months. We show that memory for an image's original temporal context was predicted by the degree to which CA1/entorhinal activity patterns from the first encounter with an image were re-expressed during re-encounters occurring minutes to months later. Critically, temporal memory signals were dissociable from predictors of recognition confidence, which were carried by distinct medial temporal lobe expressions. These findings suggest that CA1 and entorhinal cortex preserve temporal memories across long timescales by coding for and reinstating temporal context information.
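The pattern re-expression measure described above can be pictured with a short, hypothetical sketch: for each image, the single-trial activity pattern from the first encounter is correlated with the pattern from a later re-encounter, and higher correlations stand in for stronger re-expression. Array names and sizes are assumptions, not the study's code.

# Toy pattern re-expression sketch (assumed variables):
# first_patterns and repeat_patterns are (n_images, n_voxels) single-trial
# response estimates from the first and a later encounter with each image.
import numpy as np

def reexpression_scores(first_patterns, repeat_patterns):
    # Per-image Pearson correlation between first-encounter and re-encounter patterns.
    a = first_patterns - first_patterns.mean(axis=1, keepdims=True)
    b = repeat_patterns - repeat_patterns.mean(axis=1, keepdims=True)
    num = (a * b).sum(axis=1)
    denom = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    return num / denom

rng = np.random.default_rng(0)
first = rng.standard_normal((100, 300))           # 100 images x 300 CA1 voxels (hypothetical)
repeat = first + rng.standard_normal((100, 300))  # noisy re-expression of the same patterns
print(reexpression_scores(first, repeat).mean())  # average re-expression strength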
Researcher perspectives on ethics considerations in epigenetics: an international survey
Charles Dupras
Terese Knoppers
Nicole Palmour
Elisabeth Beauchamp
Stamatina Liosi
Reiner Siebert
Alison May Berner
Stephan Beck
Yann Joly
Sleep spindles track cortical learning patterns for memory consolidation
Marit Petzka
Alex Chatburn
George M. Balanos
Bernhard P. Staresina