
Lune Bellec

Affiliate Member
Associate Professor, Université de Montréal, Department of Psychology
Research Topics
Medical Machine Learning
Computational Neuroscience

Biography

I am an Associate Professor in the Department of Psychology at Université de Montréal and principal investigator of the Laboratoire de simulation cérébrale et d'exploration (SIMEXP) at the Institut universitaire de gériatrie de Montréal (CRIUGM). I recently joined Mila – Quebec Artificial Intelligence Institute as an affiliate member, and I supervise computer science students (cognitive computational neuroscience) at the Département d'informatique et de recherche opérationnelle (DIRO) of Université de Montréal. My research mainly consists of training artificial neural networks to jointly reproduce individual human brain activity and behaviour. To reach this goal, I lead an intensive effort to collect individual neuroimaging data (functional MRI, magnetoencephalography): the Courtois Project on Neuronal Modelling (CNeuroMod). I am also a senior research scholar of the Fonds de recherche du Québec - Santé (FRQS), a member of the Alliance québécoise pour l'unification des neurosciences et de l'IA (UNIQUE), and scientific director of the Unité de neuro-imagerie fonctionnelle (UNF) at the CRIUGM.

Current Students

PhD - UdeM
PhD - UdeM
Co-supervisor:
PhD - UdeM
Principal supervisor:

Publications

Training Compute-Optimal Vision Transformers for Brain Encoding
Sana Ahmadi
François Paugam
Tristan Glatard
The optimal training of a vision transformer for brain encoding depends on three factors: model size, data size, and computational resources. This study investigates these three pillars, focusing on the effects of data scaling, model scaling, and high-performance computing on brain encoding results. Using VideoGPT to extract efficient spatiotemporal features from videos and training a Ridge model to predict brain activity based on these features, we conducted benchmark experiments with varying data sizes (10k, 100k, 1M, 6M) and different model configurations of GPT-2, including hidden layer dimensions, number of layers, and number of attention heads. We also evaluated the effects of training models with 32-bit vs 16-bit floating point representations. Our results demonstrate that increasing the hidden layer dimensions significantly improves brain encoding performance, as evidenced by higher Pearson correlation coefficients across all subjects. In contrast, the number of attention heads does not have a significant effect on the encoding results. Additionally, increasing the number of layers shows some improvement in brain encoding correlations, but the trend is not as consistent as that observed with hidden layer dimensions. The data scaling results show that larger training datasets lead to improved brain encoding performance, with the highest Pearson correlation coefficients observed for the largest dataset size (6M). These findings highlight that the effects of data scaling are more significant compared to model scaling in enhancing brain encoding performance. Furthermore, we explored the impact of floating-point precision by comparing 32-bit and 16-bit representations. Training with 16-bit precision yielded the same brain encoding accuracy as 32-bit, while reducing training time by 1.17 times, demonstrating its efficiency for high-performance computing tasks.
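As an illustration of the brain-encoding pipeline described in this abstract, the sketch below fits a ridge regression from precomputed video-model features to BOLD time series and scores it with Pearson correlation. It is a minimal sketch under assumed inputs: the feature and BOLD arrays, file names, and split are placeholders, not the CNeuroMod data or the exact setup of the paper.

```python
# Minimal sketch of the ridge-based brain-encoding step described above.
# The feature matrix is assumed to come from a pretrained video model
# (e.g. VideoGPT); shapes and file names below are illustrative only.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# X: one row of spatiotemporal video features per fMRI time point,
# Y: the corresponding BOLD signal for each brain parcel or voxel.
X = np.load("video_features.npy")   # shape (n_timepoints, n_features), hypothetical file
Y = np.load("bold_timeseries.npy")  # shape (n_timepoints, n_targets), hypothetical file

X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, shuffle=False  # keep temporal order for fMRI data
)

# Ridge regression with cross-validated regularization strength.
encoder = RidgeCV(alphas=np.logspace(-1, 5, 7))
encoder.fit(X_train, Y_train)
Y_pred = encoder.predict(X_test)

# Encoding accuracy as the Pearson correlation between predicted and
# measured BOLD, computed independently for each target.
def pearson_per_target(y_true, y_pred):
    y_true = y_true - y_true.mean(axis=0)
    y_pred = y_pred - y_pred.mean(axis=0)
    num = (y_true * y_pred).sum(axis=0)
    den = np.sqrt((y_true**2).sum(axis=0) * (y_pred**2).sum(axis=0))
    return num / den

scores = pearson_per_target(Y_test, Y_pred)
print("median Pearson r across targets:", np.median(scores))
```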
Noise covariance estimation in multi-task high-dimensional linear models
Kai Tan
Gabriel Romon
Nilearn and Big Data Facilitates Transdiagnostic Brain Biomarkers
Hao-Ting Wang
Natasha Clarke
Quentin Dessain
François Paugam
Scaling up ridge regression for brain encoding in a massive individual fMRI dataset
Sana Ahmadi
Tristan Glatard
A benchmark of individual auto-regressive models in a massive fMRI dataset
François Paugam
Basile Pinsard
Dense functional magnetic resonance imaging datasets open new avenues to create auto-regressive models of brain activity. Individual idiosyncrasies are obscured by group models, but can be captured by purely individual models given sufficient amounts of training data. In this study, we compared several deep and shallow individual models on the temporal auto-regression of BOLD time series recorded during a natural video watching task. The best performing models were then analyzed in terms of their data requirements and scaling, subject specificity and the space-time structure of their predicted dynamics. We found the Chebnets, a type of graph convolutional neural network, to be best suited for temporal BOLD auto-regression, closely followed by linear models. Chebnets demonstrated an increase in performance with increasing amounts of data, with no complete saturation at 9 h of training data. Good generalization to other kinds of video stimuli and to resting state data marked the Chebnets’ ability to capture intrinsic brain dynamics rather than only stimulus-specific autocorrelation patterns. Significant subject specificity was found at short prediction time lags. The Chebnets were found to capture lower frequencies at longer prediction time lags, and the spatial correlations in predicted dynamics were found to match traditional functional connectivity networks. Overall, these results demonstrate that large individual fMRI datasets can be used to efficiently train purely individual auto-regressive models of brain activity, and that massive amounts of individual data are required to do so. The excellent performance of the Chebnets likely reflects their ability to combine spatial and temporal interactions on large time scales at a low complexity cost. The non-linearities of the models did not appear as a key advantage. In fact, surprisingly, linear versions of the Chebnets appeared to outperform the original nonlinear ones. Individual temporal auto-regressive models have the potential to improve the predictability of the BOLD signal. This study is based on a massive, publicly-available dataset, which can serve for future benchmarks of individual auto-regressive modeling.
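The best models in this benchmark were Chebnet graph convolutional networks, closely followed by linear models. The sketch below implements only a simple linear auto-regressive baseline of the kind compared in the study, assuming parcel-level BOLD time series; the window length, lag, split, and file name are illustrative choices, not those of the paper.

```python
# Sketch of a purely individual linear auto-regressive baseline for BOLD
# time series, one of the model families compared in the study above.
# Window length, lag and file names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

bold = np.load("subject_bold_parcels.npy")  # shape (n_timepoints, n_parcels), hypothetical

window, lag = 6, 1  # predict t + lag from the previous `window` time points

# Build (past window -> future frame) training pairs.
X, Y = [], []
for t in range(window, bold.shape[0] - lag):
    X.append(bold[t - window:t].ravel())  # flattened window across all parcels
    Y.append(bold[t + lag - 1])
X, Y = np.asarray(X), np.asarray(Y)

split = int(0.8 * len(X))  # chronological split, no shuffling
model = Ridge(alpha=1.0).fit(X[:split], Y[:split])
Y_pred = model.predict(X[split:])

# R2-like score per parcel as a rough measure of auto-regression quality.
resid = ((Y[split:] - Y_pred) ** 2).sum(axis=0)
total = ((Y[split:] - Y[split:].mean(axis=0)) ** 2).sum(axis=0)
print("median R2 across parcels:", np.median(1 - resid / total))
```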
Brain decoding of the Human Connectome Project tasks in a dense individual fMRI dataset
Shima Rastegarnia
Marie St-Laurent
Elizabeth DuPre
Basile Pinsard
Behavioral Imitation with Artificial Neural Networks Leads to Personalized Models of Brain Dynamics During Videogame Play
Anirudha Kemtur
François Paugam
Basile Pinsard
Yann Harel
Pravish Sainath
Maximilien Le Clei
Julie Boyle
Artificial neural networks (ANNs) trained on complex tasks are increasingly used in neuroscience to model brain dynamics, a process called brain encoding. Videogames have been extensively studied in the field of artificial intelligence, but have hardly been used yet for brain encoding. Videogames provide a promising framework to understand brain activity in a rich, engaging, and active environment. A major challenge raised by complex videogames is that individual behavior is highly variable across subjects, and we hypothesized that ANNs need to account for subject-specific behavior in order to properly capture brain dynamics. In this study, we used ANNs to model functional magnetic resonance imaging (fMRI) and behavioral gameplay data, both collected while subjects played the Shinobi III videogame. Using imitation learning, we trained an ANN to play the game while closely replicating the unique gameplay style of individual participants. We found that hidden layers of our imitation learning model successfully encoded task-relevant neural representations, and predicted individual brain dynamics with higher accuracy than models trained on other subjects’ gameplay or control models. The highest correlations between layer activations and brain signals were observed in biologically plausible brain areas, i.e. somatosensory, attention, and visual networks. Our results demonstrate that combining imitation learning, brain imaging, and videogames can allow us to model complex individual brain patterns derived from decision making in a rich, complex environment.
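A minimal sketch of the general idea, behaviour cloning followed by reuse of hidden activations as brain-encoding features, is given below in PyTorch. The small fully connected network, the input representation, the training loop, and the file names are assumptions for illustration and do not correspond to the architecture used in the study.

```python
# Minimal behaviour-cloning sketch: train a small network to imitate a
# participant's button presses, then reuse its hidden activations as
# features for brain encoding (e.g. with the ridge sketch further above).
import numpy as np
import torch
from torch import nn

states = torch.tensor(np.load("game_states.npy"), dtype=torch.float32)      # (n_frames, state_dim), hypothetical
actions = torch.tensor(np.load("button_presses.npy"), dtype=torch.float32)  # (n_frames, n_buttons), hypothetical

class ImitationNet(nn.Module):
    def __init__(self, state_dim, n_buttons, hidden=256):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, n_buttons)  # one logit per button

    def forward(self, x):
        h = self.body(x)          # hidden activations, later used as encoding features
        return self.head(h), h

model = ImitationNet(states.shape[1], actions.shape[1])
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # several buttons can be pressed at once

for epoch in range(10):
    logits, _ = model(states)
    loss = loss_fn(logits, actions)
    optim.zero_grad()
    loss.backward()
    optim.step()

# Hidden activations per frame, to be aligned with fMRI and fed to a
# ridge encoding model.
with torch.no_grad():
    _, features = model(states)
features = features.numpy()
```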
Open design of a reproducible videogame controller for MRI and MEG
Yann Harel
André Cyr
Julie Boyle
Basile Pinsard
Jeremy Bernard
Marie-France Fourcade
Himanshu Aggarwal
Ana Fernanda Ponce
Bertrand Thirion
The Canadian Open Neuroscience Platform—An open science framework for the neuroscience community
Rachel J. Harding
Patrick Bermudez
Alexander Bernier
Michael Beauvais
Sean Hill
Bartha M. Knoppers
Agah Karakuzu
Paul Pavlidis
Jean-Baptiste Poline
Jane Roskams
Nikola Stikov
Jessica Stone
Stephen Strother
CONP Consortium
Alan C. Evans
The Canadian Open Neuroscience Platform (CONP) takes a multifaceted approach to enabling open neuroscience, aiming to make research, data, and tools accessible to everyone, with the ultimate objective of accelerating discovery. Its core infrastructure is the CONP Portal, a repository with a decentralized design, where datasets and analysis tools across disparate platforms can be browsed, searched, accessed, and shared in accordance with FAIR principles. Another key piece of CONP infrastructure is NeuroLibre, a preprint server capable of creating and hosting executable and fully reproducible scientific publications that embed text, figures, and code. As part of its holistic approach, the CONP has also constructed frameworks and guidance for ethics and data governance, provided support and developed resources to help train the next generation of neuroscientists, and has fostered and grown an engaged community through outreach and communications. In this manuscript, we provide a high-level overview of this multipronged platform and its vision of lowering the barriers to the practice of open neuroscience and yielding the associated benefits for both individual researchers and the wider community.
A reproducible benchmark of resting-state fMRI denoising strategies using fMRIPrep and Nilearn
Hao-Ting Wang
Steven L. Meisler
Hanad Sharmarke
Natasha Clarke
Nicolas Gensollen
Christopher J Markiewicz
François Paugam
Bertrand Thirion
Reducing contributions from non-neuronal sources is a crucial step in functional magnetic resonance imaging (fMRI) analyses. Many viable strategies for denoising fMRI are used in the literature, and practitioners rely on denoising benchmarks for guidance in the selection of an appropriate choice for their study. However, fMRI denoising software is an ever-evolving field, and the benchmarks can quickly become obsolete as the techniques or implementations change. In this work, we present a fully reproducible denoising benchmark featuring a range of denoising strategies and evaluation metrics, built primarily on the fMRIPrep and Nilearn software packages. We apply this reproducible benchmark to investigate the robustness of the conclusions across two different datasets and two versions of fMRIPrep. The majority of benchmark results were consistent with prior literature. Scrubbing, a technique which excludes time points with excessive motion, combined with global signal regression, is generally effective at noise removal. Scrubbing however disrupts the continuous sampling of brain images and is incompatible with some statistical analyses, e.g. auto-regressive modeling. In this case, a simple strategy using motion parameters, average activity in select brain compartments, and global signal regression should be preferred. Importantly, we found that certain denoising strategies behave inconsistently across datasets and/or versions of fMRIPrep, or had a different behavior than in previously published benchmarks, especially ICA-AROMA. These results demonstrate that a reproducible denoising benchmark can effectively assess the robustness of conclusions across multiple datasets and software versions. Technologies such as BIDS-App, the Jupyter Book and Neurolibre provided the infrastructure to publish the metadata and report figures. Readers can reproduce the report figures beyond the ones reported in the published manuscript. With the denoising benchmark, we hope to provide useful guidelines for the community, and that our software infrastructure will facilitate continued development as the state-of-the-art advances.
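The sketch below shows how one of the simpler strategies discussed here (motion parameters, mean signals from selected brain compartments, high-pass filtering, and global signal regression) can be applied with Nilearn's fMRIPrep interface. It assumes fMRIPrep-preprocessed data and a recent Nilearn release; the file path and atlas choice are placeholders rather than the benchmark's exact configuration.

```python
# Sketch of applying one denoising strategy with the Nilearn/fMRIPrep
# interface benchmarked above. The file path and atlas are placeholders;
# requires fMRIPrep outputs and a recent Nilearn release.
from nilearn.datasets import fetch_atlas_schaefer_2018
from nilearn.interfaces.fmriprep import load_confounds_strategy
from nilearn.maskers import NiftiLabelsMasker

fmri_file = "sub-01_task-rest_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz"  # hypothetical path

# "simple" preset with global signal regression added: motion parameters,
# mean WM/CSF signals, high-pass filtering and GSR, close to the simple
# strategy recommended in the abstract.
confounds, sample_mask = load_confounds_strategy(
    fmri_file, denoise_strategy="simple", global_signal="basic"
)

atlas = fetch_atlas_schaefer_2018(n_rois=400)
masker = NiftiLabelsMasker(labels_img=atlas.maps, standardize=True)

# Denoised parcel time series; sample_mask drops censored volumes if any.
timeseries = masker.fit_transform(
    fmri_file, confounds=confounds, sample_mask=sample_mask
)
print(timeseries.shape)  # (n_kept_timepoints, 400)
```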
Functional connectivity subtypes associate robustly with ASD diagnosis
S. Urchs
Angela Tam
Pierre Orban
C. Moreau
Yassine Benhajali
Hien Duy Nguyen
Alan C. Evans
Our understanding of the changes in functional brain organization in autism is hampered by the extensive heterogeneity that characterizes this neurodevelopmental disorder. Data driven clustering offers a straightforward way to decompose autism heterogeneity into subtypes of connectivity and promises an unbiased framework to investigate behavioral symptoms and causative genetic factors. Yet, the robustness and generalizability of functional connectivity subtypes is unknown. Here, we show that a simple hierarchical cluster analysis can robustly relate a given individual and brain network to a connectivity subtype, but that continuous assignments are more robust than discrete ones. We also found that functional connectivity subtypes are moderately associated with the clinical diagnosis of autism, and these associations generalize to independent replication data. We explored systematically 18 different brain networks as we expected them to associate with different behavioral profiles as well as different key regions. Contrary to this prediction, autism functional connectivity subtypes converged on a common topography across different networks, consistent with a compression of the primary gradient of functional brain organization, as previously reported in the literature. Our results support the use of data driven clustering as a reliable data dimensionality reduction technique, where any given dimension only associates moderately with clinical manifestations.
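A minimal sketch of this subtype approach is given below: hierarchical clustering of individual connectivity maps into discrete subtypes, followed by continuous assignments computed as correlations with the subtype centroids. Synthetic data, the number of subtypes, and the Ward linkage are illustrative assumptions, not the exact pipeline of the study.

```python
# Sketch of connectivity subtyping: hierarchical clustering of individual
# connectivity maps, plus continuous assignments as correlations with the
# subtype centroids. Synthetic data stand in for real connectivity maps.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_subjects, n_connections = 200, 500
maps = rng.standard_normal((n_subjects, n_connections))  # placeholder connectivity maps

# Ward hierarchical clustering into a fixed number of discrete subtypes.
n_subtypes = 5
Z = linkage(maps, method="ward")
labels = fcluster(Z, t=n_subtypes, criterion="maxclust")

# Subtype centroids: the average map of the individuals assigned to each subtype.
centroids = np.vstack([maps[labels == k].mean(axis=0) for k in range(1, n_subtypes + 1)])

# Continuous assignments: Pearson correlation of each individual map with each
# centroid, the more robust representation reported in the study.
maps_z = (maps - maps.mean(axis=1, keepdims=True)) / maps.std(axis=1, keepdims=True)
cent_z = (centroids - centroids.mean(axis=1, keepdims=True)) / centroids.std(axis=1, keepdims=True)
weights = maps_z @ cent_z.T / n_connections  # shape (n_subjects, n_subtypes)

# These continuous weights can then be tested for association with diagnosis.
print(weights.shape)
```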