
Maximilien Le Clei

PhD - UdeM
Principal supervisor
Research topics
Deep learning

Publications

Generational Information Transfer with Neuroevolution on Control Tasks
Stav Bar-Sheshet
Pierre Bellec
Lune P Bellec
Behavioral Imitation with Artificial Neural Networks Leads to Personalized Models of Brain Dynamics During Videogame Play
Anirudha Kemtur
Basile Pinsard
Yann Harel
Julie Boyle
Pierre Bellec
Videogames provide a promising framework to understand brain activity in a rich, engaging, and active environment, in contrast to the mostly passive tasks currently dominating the field, such as image viewing. Analyzing videogame neuroimaging data is, however, challenging, and relies on time-intensive manual annotations of game events based on somewhat arbitrary rules. Here, we introduce an innovative approach using Artificial Neural Networks (ANNs) and brain encoding techniques to generate activation maps associated with videogame behaviour using functional magnetic resonance imaging (fMRI). As individual behavior is highly variable across subjects in complex environments, we hypothesized that ANNs need to account for subject-specific behavior to properly capture brain dynamics. In this study, we used data collected while subjects played Shinobi III: Return of the Ninja Master (Sega, 1993), an action-platformer videogame. Using imitation learning, we trained an ANN to play the game while closely replicating the unique gameplay style of individual participants. We found that hidden layers of our imitation learning model successfully encoded task-relevant neural representations, and predicted individual brain dynamics with higher accuracy than models trained on other subjects’ gameplay. Individual-specific models also outperformed a number of baselines for predicting brain activity, such as pixel inputs or button presses. The highest correlations between layer activations and brain signals were observed in biologically plausible brain areas, i.e., somatosensory, attention, and visual networks. Our results demonstrate that training subject-specific ANNs can successfully uncover brain correlates of complex behaviour. This new method combining imitation learning, brain imaging, and videogames opens new research avenues to study decision-making and psychomotor task solving in naturalistic and complex environments.
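
The abstract above combines two ingredients: an imitation-learning model trained to reproduce a player's actions, and a brain-encoding step that maps the model's hidden activations onto fMRI signals. The sketch below illustrates only the encoding step in the spirit of standard encoding analyses, using ridge regression from layer activations to parcel-wise BOLD signals; all shapes, variable names, and the use of scikit-learn are illustrative assumptions, not the authors' actual pipeline.

# Minimal brain-encoding sketch: hidden-layer activations of a trained network
# are used as features to predict fMRI signals with a regularized linear model.
# All data here are random placeholders; shapes and names are hypothetical.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# One row per fMRI time point (after any lag/HRF alignment), hidden-layer
# activations as features, parcel-wise BOLD signals as targets.
n_timepoints, n_features, n_parcels = 2000, 512, 400
layer_activations = rng.standard_normal((n_timepoints, n_features))
bold_signals = rng.standard_normal((n_timepoints, n_parcels))

# Keep temporal order when splitting, as is common for time-series fMRI data.
X_train, X_test, y_train, y_test = train_test_split(
    layer_activations, bold_signals, test_size=0.2, shuffle=False
)

# Ridge regression with cross-validated regularization, fit jointly over all parcels.
encoder = RidgeCV(alphas=np.logspace(-2, 4, 7))
encoder.fit(X_train, y_train)
y_pred = encoder.predict(X_test)

def columnwise_corr(a, b):
    # Pearson correlation computed independently for each parcel (column).
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

scores = columnwise_corr(y_pred, y_test)
print("median encoding correlation:", np.median(scores))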
Alignment of auditory artificial networks with massive individual fMRI brain data leads to generalisable improvements in brain encoding and downstream tasks
Maelle Freteault
Loic Tetrel
Lune P Bellec
Nicolas Farrugia
Artificial neural networks trained in the field of artificial intelligence (AI) have emerged as key tools to model brain processes, sparking the idea of aligning network representations with brain dynamics to enhance performance on AI tasks. While this concept has gained support in the visual domain, we investigate here the feasibility of creating auditory artificial neural models directly aligned with individual brain activity. This objective raises major computational challenges, as models have to be trained directly with brain data, which is typically collected at a much smaller scale than data used to train AI models. We aimed to answer two key questions: (1) Can brain alignment of auditory models lead to improved brain encoding for novel, previously unseen stimuli? (2) Can brain alignment lead to generalisable representations of auditory signals that are useful for solving a variety of complex auditory tasks? To answer these questions, we relied on two massive datasets: a deep phenotyping dataset from the Courtois neuronal modelling project, where six subjects watched four seasons (36 hours) of the Friends TV series in functional magnetic resonance imaging, and the HEAR benchmark, a large battery of downstream auditory tasks. We fine-tuned SoundNet, a small pretrained convolutional neural network with ∼2.5M parameters. Aligning SoundNet with brain data from three seasons of Friends led to substantial improvement in brain encoding in the fourth season, extending beyond auditory and visual cortices. We also observed consistent performance gains on the HEAR benchmark, particularly for tasks with limited training data, where brain-aligned models performed comparably to the best-performing models regardless of size. We finally compared individual and group models, finding that individual models often matched or outperformed group models in both brain encoding and downstream task performance, highlighting the data efficiency of fine-tuning with individual brain data. Our results demonstrate the feasibility of aligning artificial neural network representations with individual brain activity during auditory processing, and suggest that this alignment is particularly beneficial for tasks with limited training data. Future research is needed to establish whether larger models can achieve even better performance and whether the observed gains extend to other tasks, particularly in the context of few-shot learning.
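
The brain-alignment idea in this abstract, fine-tuning a pretrained audio network so its representations predict individual fMRI signals, can be sketched roughly as below. The backbone is a stand-in (not SoundNet), the parcel-wise targets and shapes are hypothetical, and the training loop is a toy; it only illustrates the general fine-tuning setup under those assumptions.

# Minimal brain-alignment sketch: a small pretrained audio CNN is fine-tuned
# end-to-end so that a linear head on its features predicts fMRI parcel signals.
# Backbone, shapes, and data are placeholders, not the published model or dataset.
import torch
from torch import nn

class AudioBackbone(nn.Module):
    """Stand-in 1D convolutional backbone over raw waveforms."""
    def __init__(self, n_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, n_channels, kernel_size=64, stride=8), nn.ReLU(),
            nn.Conv1d(n_channels, n_channels, kernel_size=32, stride=8), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, waveform):               # (batch, 1, samples)
        return self.net(waveform).squeeze(-1)  # (batch, n_channels)

class BrainAlignedModel(nn.Module):
    """Backbone plus a linear encoding head mapping features to brain parcels."""
    def __init__(self, backbone, n_features=64, n_parcels=400):
        super().__init__()
        self.backbone = backbone
        self.encoding_head = nn.Linear(n_features, n_parcels)

    def forward(self, waveform):
        return self.encoding_head(self.backbone(waveform))

model = BrainAlignedModel(AudioBackbone())
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Hypothetical batch: audio segments paired with the fMRI volume they overlap.
waveforms = torch.randn(8, 1, 16000)   # eight one-second clips at 16 kHz
bold_targets = torch.randn(8, 400)     # matching parcel-wise BOLD signals

for step in range(10):                 # toy training loop
    optimizer.zero_grad()
    loss = loss_fn(model(waveforms), bold_targets)
    loss.backward()
    optimizer.step()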