
Lune Bellec

Affiliate Member
Associate Professor, Université de Montréal, Department of Psychology
Research Topics
Computational Neuroscience
Medical Machine Learning

Biography

I am an associate professor in the Department of Psychology at Université de Montréal, and a principal investigator at SIMEXP – Laboratory for Brain Simulation and Exploration at the CRIUGM (research centre of the Montréal university geriatrics institute). I recently joined Mila – Quebec Artificial Intelligence Institute as an affiliate member, and I also supervise students in computer science (cognitive computational neuroscience) at the Department of Computer Science and Operations Research (DIRO), Université de Montréal.

My main research interest is training artificial neural networks to jointly mimic individual human brain activity and behaviour. To achieve this goal, I lead an intensive effort of individual data collection in neuroimaging (fMRI, MEG) and the Courtois project on neuronal modelling (CNeuroMod). I am a senior Fonds de recherche du Québec - Santé (FRQS) scholar, a member of UNIQUE (the Quebec alliance for unifying neuroscience and AI), and the scientific director of the CRIUGM’s functional neuroimaging unit.

Current Students

Master's Research - Université de Montréal
Co-supervisor :
PhD - Université de Montréal
PhD - Université de Montréal
Co-supervisor :
PhD - Université de Montréal
Principal supervisor :

Publications

Training Compute-Optimal Vision Transformers for Brain Encoding
Sana Ahmadi
François Paugam
Tristan Glatard
The optimal training of a vision transformer for brain encoding depends on three factors: model size, data size, and computational resources. This study investigates these three pillars, focusing on the effects of data scaling, model scaling, and high-performance computing on brain encoding results. Using VideoGPT to extract efficient spatiotemporal features from videos and training a Ridge model to predict brain activity based on these features, we conducted benchmark experiments with varying data sizes (10k, 100k, 1M, 6M) and different model configurations of GPT-2, including hidden layer dimensions, number of layers, and number of attention heads. We also evaluated the effects of training models with 32-bit vs 16-bit floating-point representations. Our results demonstrate that increasing the hidden layer dimensions significantly improves brain encoding performance, as evidenced by higher Pearson correlation coefficients across all subjects. In contrast, the number of attention heads does not have a significant effect on the encoding results. Additionally, increasing the number of layers shows some improvement in brain encoding correlations, but the trend is not as consistent as that observed with hidden layer dimensions. The data scaling results show that larger training datasets lead to improved brain encoding performance, with the highest Pearson correlation coefficients observed for the largest dataset size (6M). These findings highlight that the effects of data scaling are more significant than those of model scaling in enhancing brain encoding performance. Furthermore, we explored the impact of floating-point precision by comparing 32-bit and 16-bit representations. Training with 16-bit precision yielded the same brain encoding accuracy as 32-bit, while reducing training time by a factor of 1.17, demonstrating its efficiency for high-performance computing tasks.
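As a rough illustration of the encoding pipeline described above, the sketch below maps pre-extracted stimulus features to brain activity with ridge regression and scores predictions with the target-wise Pearson correlation used in the paper. It uses synthetic arrays in place of the actual VideoGPT features and fMRI data; the shapes and regularization strength are placeholders, not values from the study.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: in the study, features come from a VideoGPT-style
# backbone and targets are voxel/parcel-wise fMRI activity.
rng = np.random.default_rng(0)
n_samples, n_features, n_targets = 2000, 512, 400
X = rng.standard_normal((n_samples, n_features)).astype(np.float32)
W = rng.standard_normal((n_features, n_targets)) * 0.05
Y = X @ W + rng.standard_normal((n_samples, n_targets))

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

# Ridge regression maps stimulus features to brain activity (hypothetical alpha).
encoder = Ridge(alpha=10.0)
encoder.fit(X_train, Y_train)
Y_pred = encoder.predict(X_test)

def pearson_per_target(y_true, y_pred):
    # Pearson correlation computed independently for each brain target.
    y_true = y_true - y_true.mean(axis=0)
    y_pred = y_pred - y_pred.mean(axis=0)
    num = (y_true * y_pred).sum(axis=0)
    den = np.sqrt((y_true ** 2).sum(axis=0) * (y_pred ** 2).sum(axis=0))
    return num / den

print("mean Pearson r:", pearson_per_target(Y_test, Y_pred).mean())
```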
Noise covariance estimation in multi-task high-dimensional linear models
Kai Tan
Gabriel Romon
Nilearn and Big Data Facilitates Transdiagnostic Brain Biomarkers
Hao-Ting Wang
Natasha Clarke
Quentin Dessain
François Paugam
Scaling up ridge regression for brain encoding in a massive individual fMRI dataset
Sana Ahmadi
Tristan Glatard
A benchmark of individual auto-regressive models in a massive fMRI dataset
François Paugam
Basile Pinsard
Dense functional magnetic resonance imaging datasets open new avenues to create auto-regressive models of brain activity. Individual idiosyncrasies are obscured by group models, but can be captured by purely individual models given sufficient amounts of training data. In this study, we compared several deep and shallow individual models on the temporal auto-regression of BOLD time series recorded during a natural video watching task. The best performing models were then analyzed in terms of their data requirements and scaling, subject specificity and the space-time structure of their predicted dynamics. We found the Chebnets, a type of graph convolutional neural network, to be best suited for temporal BOLD auto-regression, closely followed by linear models. Chebnets demonstrated an increase in performance with increasing amounts of data, with no complete saturation at 9 h of training data. Good generalization to other kinds of video stimuli and to resting state data marked the Chebnets’ ability to capture intrinsic brain dynamics rather than only stimulus-specific autocorrelation patterns. Significant subject specificity was found at short prediction time lags. The Chebnets were found to capture lower frequencies at longer prediction time lags, and the spatial correlations in predicted dynamics were found to match traditional functional connectivity networks. Overall, these results demonstrate that large individual fMRI datasets can be used to efficiently train purely individual auto-regressive models of brain activity, and that massive amounts of individual data are required to do so. The excellent performance of the Chebnets likely reflects their ability to combine spatial and temporal interactions on large time scales at a low complexity cost. The non-linearities of the models did not appear to be a key advantage. In fact, surprisingly, linear versions of the Chebnets appeared to outperform the original nonlinear ones. Individual temporal auto-regressive models have the potential to improve the predictability of the BOLD signal. This study is based on a massive, publicly-available dataset, which can serve for future benchmarks of individual auto-regressive modeling.
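The best models in the study were Chebnet graph convolutions, with linear models a close second. The sketch below illustrates the general idea of individual auto-regressive modelling using a simple lagged linear (ridge) baseline on synthetic BOLD time series; the window length, parcellation size, and regularization are hypothetical and not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic BOLD time series (n_timepoints, n_parcels); in the study these are
# dense individual fMRI recordings acquired during movie watching.
rng = np.random.default_rng(42)
n_time, n_parcels, lags = 5000, 200, 5
bold = rng.standard_normal((n_time, n_parcels)).astype(np.float32)

def make_lagged(ts, lags):
    # Predict the signal at time t from the previous `lags` volumes.
    X = np.concatenate([ts[i:len(ts) - lags + i] for i in range(lags)], axis=1)
    y = ts[lags:]
    return X, y

X, y = make_lagged(bold, lags)
split = int(0.8 * len(X))
model = Ridge(alpha=1.0).fit(X[:split], y[:split])
pred = model.predict(X[split:])

# R2 per parcel as a simple measure of auto-regressive predictability.
resid = ((y[split:] - pred) ** 2).sum(axis=0)
total = ((y[split:] - y[split:].mean(axis=0)) ** 2).sum(axis=0)
print("mean R2:", (1 - resid / total).mean())
```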
Brain decoding of the Human Connectome Project tasks in a dense individual fMRI dataset
Shima Rastegarnia
Marie St-Laurent
Elizabeth DuPre
Basile Pinsard
Behavioral Imitation with Artificial Neural Networks Leads to Personalized Models of Brain Dynamics During Videogame Play
Anirudha Kemtur
François Paugam
Basile Pinsard
Yann Harel
Pravish Sainath
Maximilien Le Clei
Julie Boyle
Artificial neural networks (ANNs) trained on complex tasks are increasingly used in neuroscience to model brain dynamics, a process called brain encoding. Videogames have been extensively studied in the field of artificial intelligence, but have hardly been used yet for brain encoding. Videogames provide a promising framework to understand brain activity in a rich, engaging, and active environment. A major challenge raised by complex videogames is that individual behavior is highly variable across subjects, and we hypothesized that ANNs need to account for subject-specific behavior in order to properly capture brain dynamics. In this study, we used ANNs to model functional magnetic resonance imaging (fMRI) and behavioral gameplay data, both collected while subjects played the Shinobi III videogame. Using imitation learning, we trained an ANN to play the game while closely replicating the unique gameplay style of individual participants. We found that hidden layers of our imitation learning model successfully encoded task-relevant neural representations, and predicted individual brain dynamics with higher accuracy than models trained on other subjects’ gameplay or control models. The highest correlations between layer activations and brain signals were observed in biologically plausible brain areas, i.e. somatosensory, attention, and visual networks. Our results demonstrate that combining imitation learning, brain imaging, and videogames can allow us to model complex individual brain patterns derived from decision making in a rich, complex environment.
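A minimal sketch of the behavioral-cloning idea described above: a small convolutional policy is trained to reproduce recorded actions from game frames, and its hidden activations can subsequently be regressed onto fMRI signals with a standard encoding model. The architecture, frame size, and action space below are hypothetical placeholders, not the network used in the study.

```python
import torch
import torch.nn as nn

# Hypothetical setup: 84x84 grayscale game frames, 12 possible button combinations.
n_actions = 12

class ImitationPolicy(nn.Module):
    """Small CNN trained by behavioral cloning to reproduce a player's actions."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 9 * 9, 256), nn.ReLU(),
        )
        self.head = nn.Linear(256, n_actions)

    def forward(self, frames, return_hidden=False):
        h = self.features(frames)   # hidden activations, reusable for brain encoding
        logits = self.head(h)
        return (logits, h) if return_hidden else logits

# One behavioral-cloning step on synthetic (frame, action) pairs.
frames = torch.randn(64, 1, 84, 84)
actions = torch.randint(0, n_actions, (64,))
policy = ImitationPolicy()
optim = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss = nn.functional.cross_entropy(policy(frames), actions)
loss.backward()
optim.step()

# Hidden activations (one row per frame) can then feed a ridge model that
# predicts fMRI time series, as in standard brain-encoding pipelines.
_, hidden = policy(frames, return_hidden=True)
print(hidden.shape)  # torch.Size([64, 256])
```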
Open design of a reproducible videogame controller for MRI and MEG
Yann Harel
André Cyr
Julie Boyle
Basile Pinsard
Jeremy Bernard
Marie-France Fourcade
Himanshu Aggarwal
Ana Fernanda Ponce
Bertrand Thirion
The Canadian Open Neuroscience Platform—An open science framework for the neuroscience community
Rachel J. Harding
Patrick Bermudez
Alexander Bernier
Michael Beauvais
Sean Hill
Bartha M. Knoppers
Agah Karakuzu
Paul Pavlidis
Jean-Baptiste Poline
Jane Roskams
Nikola Stikov
Jessica Stone
Stephen Strother
CONP Consortium
Alan C. Evans
The Canadian Open Neuroscience Platform (CONP) takes a multifaceted approach to enabling open neuroscience, aiming to make research, data, and tools accessible to everyone, with the ultimate objective of accelerating discovery. Its core infrastructure is the CONP Portal, a repository with a decentralized design, where datasets and analysis tools across disparate platforms can be browsed, searched, accessed, and shared in accordance with FAIR principles. Another key piece of CONP infrastructure is NeuroLibre, a preprint server capable of creating and hosting executable and fully reproducible scientific publications that embed text, figures, and code. As part of its holistic approach, the CONP has also constructed frameworks and guidance for ethics and data governance, provided support and developed resources to help train the next generation of neuroscientists, and has fostered and grown an engaged community through outreach and communications. In this manuscript, we provide a high-level overview of this multipronged platform and its vision of lowering the barriers to the practice of open neuroscience and yielding the associated benefits for both individual researchers and the wider community.
A reproducible benchmark of resting-state fMRI denoising strategies using fMRIPrep and Nilearn
Hao-Ting Wang
Steven L. Meisler
Hanad Sharmarke
Natasha Clarke
Nicolas Gensollen
Christopher J Markiewicz
François Paugam
Bertrand Thirion
Reducing contributions from non-neuronal sources is a crucial step in functional magnetic resonance imaging (fMRI) analyses. Many viable strategies for denoising fMRI are used in the literature, and practitioners rely on denoising benchmarks for guidance in selecting an appropriate strategy for their study. However, fMRI denoising software is constantly evolving, and benchmarks can quickly become obsolete as techniques or implementations change. In this work, we present a fully reproducible denoising benchmark featuring a range of denoising strategies and evaluation metrics, built primarily on the fMRIPrep and Nilearn software packages. We apply this reproducible benchmark to investigate the robustness of the conclusions across two different datasets and two versions of fMRIPrep. The majority of benchmark results were consistent with prior literature. Scrubbing, a technique which excludes time points with excessive motion, combined with global signal regression, is generally effective at noise removal. Scrubbing, however, disrupts the continuous sampling of brain images and is incompatible with some statistical analyses, e.g. auto-regressive modeling. In this case, a simple strategy using motion parameters, average activity in select brain compartments, and global signal regression should be preferred. Importantly, we found that certain denoising strategies behave inconsistently across datasets and/or versions of fMRIPrep, or had a different behavior than in previously published benchmarks, especially ICA-AROMA. These results demonstrate that a reproducible denoising benchmark can effectively assess the robustness of conclusions across multiple datasets and software versions. Technologies such as BIDS-App, the Jupyter Book and NeuroLibre provided the infrastructure to publish the metadata and report figures. Readers can reproduce the report figures, including figures beyond those presented in the published manuscript. With this denoising benchmark, we hope to provide useful guidelines for the community, and that our software infrastructure will facilitate continued development as the state-of-the-art advances.
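For readers who want to apply a comparable denoising strategy, the sketch below uses Nilearn's fMRIPrep confound interface to regress a predefined confound set plus global signal while extracting parcel-wise time series. This is only one possible strategy among those discussed above; file names are placeholders, and the exact strategies and parameters benchmarked in the paper may differ.

```python
from nilearn.interfaces.fmriprep import load_confounds_strategy
from nilearn.maskers import NiftiLabelsMasker
from nilearn.connectome import ConnectivityMeasure

# Hypothetical fMRIPrep output and parcellation paths; replace with your own files.
bold_file = "sub-01_task-rest_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz"
atlas_file = "parcellation_labels.nii.gz"

# Pull a predefined confound set ("simple": motion parameters, mean WM/CSF
# signals, high-pass filtering) plus basic global signal regression.
confounds, sample_mask = load_confounds_strategy(
    bold_file, denoise_strategy="simple", global_signal="basic"
)

# Regress out the confounds while extracting parcel-wise time series.
masker = NiftiLabelsMasker(labels_img=atlas_file, standardize=True)
time_series = masker.fit_transform(
    bold_file, confounds=confounds, sample_mask=sample_mask
)

# Denoising quality is typically assessed downstream, e.g. via motion/connectivity
# associations computed from correlation matrices like this one.
connectivity = ConnectivityMeasure(kind="correlation").fit_transform([time_series])[0]
```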
Functional connectivity subtypes associate robustly with ASD diagnosis
Sebastian G. W. Urchs
Angela Tam
Pierre Orban
Clara A. Moreau
Yassine Benhajali
Hien Duy Nguyen
Alan C. Evans
Our understanding of the changes in functional brain organization in autism is hampered by the extensive heterogeneity that characterizes this neurodevelopmental disorder. Data-driven clustering offers a straightforward way to decompose autism heterogeneity into subtypes of connectivity and promises an unbiased framework to investigate behavioral symptoms and causative genetic factors. Yet, the robustness and generalizability of functional connectivity subtypes are unknown. Here, we show that a simple hierarchical cluster analysis can robustly relate a given individual and brain network to a connectivity subtype, but that continuous assignments are more robust than discrete ones. We also found that functional connectivity subtypes are moderately associated with the clinical diagnosis of autism, and these associations generalize to independent replication data. We systematically explored 18 different brain networks, as we expected them to associate with different behavioral profiles as well as different key regions. Contrary to this prediction, autism functional connectivity subtypes converged on a common topography across different networks, consistent with a compression of the primary gradient of functional brain organization, as previously reported in the literature. Our results support the use of data-driven clustering as a reliable data dimensionality reduction technique, where any given dimension only associates moderately with clinical manifestations.
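A minimal sketch of the subtyping approach on synthetic data: hierarchical (Ward) clustering groups individual connectivity maps into discrete subtypes, and continuous assignments are obtained as the spatial correlation between each individual map and each subtype average. The number of subtypes and all array shapes below are placeholders, not the settings used in the study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic stand-in: one connectivity map per participant
# (rows = participants, columns = brain parcels).
rng = np.random.default_rng(7)
n_subjects, n_parcels, n_subtypes = 300, 200, 5
maps = rng.standard_normal((n_subjects, n_parcels))

# Hierarchical (Ward) clustering of individual maps into discrete subtypes.
Z = linkage(maps, method="ward")
labels = fcluster(Z, t=n_subtypes, criterion="maxclust")

# Subtype templates: average map within each cluster.
centroids = np.vstack([maps[labels == k].mean(axis=0)
                       for k in range(1, n_subtypes + 1)])

def rowwise_corr(A, B):
    # Pearson correlation between every row of A and every row of B.
    A = A - A.mean(axis=1, keepdims=True)
    B = B - B.mean(axis=1, keepdims=True)
    num = A @ B.T
    den = np.outer(np.linalg.norm(A, axis=1), np.linalg.norm(B, axis=1))
    return num / den

# Continuous subtype assignments; these weights, rather than the discrete labels,
# can then be tested for association with diagnosis (e.g. logistic regression).
weights = rowwise_corr(maps, centroids)   # shape (n_subjects, n_subtypes)
print(weights.shape)
```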