Publications

The Past, Present, and Future of the Brain Imaging Data Structure (BIDS)
Russell A. Poldrack
Christopher J. Honey
Stefan Appelhoff
Yoni K. Ashar
Tibor Auer
Guillaume Flandin
Shashank Bansal
Leandro Beltrachini
Christian G. Bénar
Giacomo Bertazzoli
Suyash Bhogawar
Ross W. Blair
Marta Bortoletto
Mathieu Boudreau
Teon L. Brooks
Balint Kincses
Filippo Maria Castelli
Patricia Clement
Alexander L. Cohen
Julien Cohen-Adad
Sasha D'Ambrosio
Gilles de Hollander
María de la Iglesia-Vayá
Alejandro de la Vega
Arnaud Delorme
Orrin Devinsky
Dejan Draschkow
Eugene Paul Duff
Elizabeth DuPre
Eric Earl
Oscar Esteban
Franklin W. Feingold
Melanie Ganz
Anthony Galassi
Giuseppe Gallitto
Gaia Rizzo
James Gholam
Sulagna Dia Ghosh
Satrajit S. Ghosh
Giacomo Guidali
Ashley G. Gillman
Padraig Gleeson
Alexandre Gramfort
Samuel Guay
Alessio Giacomel
Yaroslav O. Halchenko
Nell Hardcastle
Peer Herholz
Dora Hermes
Robert B. Innis
Sajjad Torabian
Andrew Jahn
Agah Karakuzu
David B. Keator
Gregory Kiar
Vince D. Calhoun
Angela R. Laird
Jonathan C. Lau
Sylvain Baillet
Jon Haitz Legarreta
Adam Li
Xiangrui Li
Bradley C. Love
Hanzhang Lu
Martin Norgaard
Camille Maumet
Giacomo Mazzamuto
Steven L. Meisler
Mark Mikkelsen
Henk Mutsaerts
Adam G. Thomas
Aki Nikolaidis
Gustav Nilsonne
Guiomar Niso
Eleonora Marcantoni
Robert Oostenveld
Eduard Ort
Patrick J. Park
Mateusz Pawlik
Petra Ritter
Franco Pestilli
Jan Petr
Jean-Baptiste Poline
Luca Pollonini
Pradeep Reddy Raamana
Rémi Gau
Kay A. Robbins
Ariel Rokem
Chris Rorden
Jose Manuel Saborit-Torres
Michael Schirner
Robert E. Smith
Tamas Spisak
Julia Sprenger
Nicole C. Swann
Martin Szinte
Sylvain Takerkart
Bertrand Thirion
Gael Varoquaux
Julius Welzel
Martin Wilson
Tal Yarkoni
Krzysztof J. Gorgolewski
The Brain Imaging Data Structure (BIDS) is a community-driven standard for the organization of data and metadata from a growing range of neuroscience modalities. This paper presents a history of how the standard has developed and grown over time. We outline the principles behind the project, the mechanisms by which it has been extended, and some of the challenges being addressed as it evolves. We also discuss the lessons learned through the project, with the aim of enabling researchers in other domains to learn from the success of BIDS.
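As an illustration of the structure the standard defines, a minimal BIDS-style dataset can be sketched in a few lines of Python. The subject label, datatype folder, and filenames below follow the published BIDS naming convention, but the dataset itself is hypothetical:

```python
from pathlib import Path
import json
import tempfile

def make_minimal_bids(root: Path) -> None:
    """Create a minimal, illustrative BIDS-style dataset skeleton."""
    root.mkdir(parents=True, exist_ok=True)
    # dataset_description.json is required at the top level of every BIDS dataset.
    (root / "dataset_description.json").write_text(
        json.dumps({"Name": "Demo dataset", "BIDSVersion": "1.8.0"})
    )
    # One subject with one anatomical scan, named per the convention
    # sub-<label>/<datatype>/sub-<label>_<suffix>.<ext>.
    anat = root / "sub-01" / "anat"
    anat.mkdir(parents=True, exist_ok=True)
    (anat / "sub-01_T1w.nii.gz").touch()

root = Path(tempfile.mkdtemp()) / "my_dataset"
make_minimal_bids(root)
print(sorted(p.relative_to(root).as_posix() for p in root.rglob("*") if p.is_file()))
# → ['dataset_description.json', 'sub-01/anat/sub-01_T1w.nii.gz']
```

Real datasets add per-modality JSON sidecars, participant tables, and further datatypes, but they all extend this same predictable layout.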
Leveraging ChatGPT to Democratize and Decolonize Global Surgery: Large Language Models for Small Healthcare Budgets
Local field potentials in human motor and non-motor brain areas encode the direction of upcoming movements: An intracerebral EEG classification study
Etienne Combrisson
Franck Di Rienzo
Anne-Lise Saive
Marcela Perrone-Bertolotti
Juan LP Soto
Philippe Kahane
Jean-Philippe Lachaux
Aymeric Guillot
Karim Jerbi
Neural Causal Structure Discovery from Interventions
Nan Rosemary Ke
Bernhard Schölkopf
Michael Curtis Mozer
Christopher Pal
Recent promising results have generated a surge of interest in continuous optimization methods for causal discovery from observational data. However, there are theoretical limitations on the identifiability of underlying structures obtained solely from observational data. Interventional data, on the other hand, provides richer information about the underlying data-generating process. Nevertheless, extending and applying methods designed for observational data to include interventions is a challenging problem. To address this issue, we propose a general framework based on neural networks to develop models that incorporate both observational and interventional data. Notably, our method can handle the challenging and realistic scenario where the identity of the intervened upon variable is unknown. We evaluate our proposed approach in the context of graph recovery, both de novo and from a partially-known edge set. Our method achieves strong benchmark results on various structure learning tasks, including structure recovery of synthetic graphs as well as standard graphs from the Bayesian Network Repository.
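Why interventional data helps orient causal edges can be illustrated with a toy two-variable example (numpy only; this is a hand-rolled invariance check, not the paper's neural framework): the conditional along the true causal direction is invariant under interventions on the cause, while the anti-causal conditional shifts between regimes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

def fit_slope(x, y):
    """Least-squares slope of y regressed on x (with intercept)."""
    X = np.column_stack([x, np.ones_like(x)])
    return np.linalg.lstsq(X, y, rcond=None)[0][0]

# Ground truth: X -> Y with mechanism Y = 2X + noise.
x_obs = rng.normal(0, 1, n)
y_obs = 2 * x_obs + rng.normal(0, 1, n)

# Intervention do(X): X is set exogenously (a different distribution),
# but the mechanism generating Y from X is untouched.
x_int = rng.uniform(-4, 4, n)
y_int = 2 * x_int + rng.normal(0, 1, n)

# The causal conditional P(Y | X) is stable across regimes...
fwd_shift = abs(fit_slope(x_obs, y_obs) - fit_slope(x_int, y_int))
# ...while the anti-causal conditional P(X | Y) changes with it.
rev_shift = abs(fit_slope(y_obs, x_obs) - fit_slope(y_int, x_int))

direction = "X -> Y" if fwd_shift < rev_shift else "Y -> X"
print(direction)  # → X -> Y
```

Observational data alone cannot distinguish the two directions in this linear-Gaussian setting; the interventional regime breaks the symmetry.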
Let Coarse-Grained Resources Be Shared: Mapping Entire Neural Networks on FPGAs
Leveraging World Model Disentanglement in Value-Based Multi-Agent Reinforcement Learning
Zhizun Wang
In this paper, we propose a novel model-based multi-agent reinforcement learning approach named Value Decomposition Framework with Disentangled World Model to address the challenge of achieving a common goal of multiple agents interacting in the same environment with reduced sample complexity. Due to scalability and non-stationarity problems posed by multi-agent systems, model-free methods rely on a considerable number of samples for training. In contrast, we use a modularized world model, composed of action-conditioned, action-free, and static branches, to unravel the environment dynamics and produce imagined outcomes based on past experience, without sampling directly from the real environment. We employ variational auto-encoders and variational graph auto-encoders to learn the latent representations for the world model, which is merged with a value-based framework to predict the joint action-value function and optimize the overall training objective. We present experimental results in Easy, Hard, and Super-Hard StarCraft II micro-management challenges to demonstrate that our method achieves high sample efficiency and exhibits superior performance in defeating the enemy armies compared to other baselines.
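The variational auto-encoder component can be sketched in a few lines of numpy. This is a minimal illustration of the reparameterization trick and the two ELBO terms, with toy linear maps standing in for the encoder and decoder; it is not the paper's world-model architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def vae_step(x, enc_w, dec_w):
    """One forward pass of a toy linear VAE: encode, reparameterize, decode,
    and return the two terms of the (negative) ELBO."""
    # Encoder: a linear map to the posterior mean; fixed unit variance for brevity.
    mu = x @ enc_w
    logvar = np.zeros_like(mu)
    # Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps
    # Decoder: linear map back to input space; squared-error reconstruction loss.
    x_hat = z @ dec_w
    recon = np.mean((x - x_hat) ** 2)
    # KL(q(z|x) || N(0, I)) in closed form for diagonal Gaussians.
    kl = -0.5 * np.mean(1 + logvar - mu**2 - np.exp(logvar))
    return recon + kl, recon, kl

x = rng.normal(size=(64, 8))
enc_w = rng.normal(scale=0.1, size=(8, 2))
dec_w = rng.normal(scale=0.1, size=(2, 8))
loss, recon, kl = vae_step(x, enc_w, dec_w)
print(recon > 0 and kl >= 0)  # → True
```

Training minimizes this loss with respect to the encoder and decoder weights, balancing reconstruction against staying close to the prior.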
Alignment of auditory artificial networks with massive individual fMRI brain data leads to generalisable improvements in brain encoding and downstream tasks
Maelle Freteault
Loic Tetrel
Lune P Bellec
Nicolas Farrugia
Artificial neural networks trained in the field of artificial intelligence (AI) have emerged as key tools to model brain processes, sparking the idea of aligning network representations with brain dynamics to enhance performance on AI tasks. While this concept has gained support in the visual domain, we investigate here the feasibility of creating auditory artificial neural models directly aligned with individual brain activity. This objective raises major computational challenges, as models have to be trained directly with brain data, which is typically collected at a much smaller scale than data used to train AI models. We aimed to answer two key questions: (1) Can brain alignment of auditory models lead to improved brain encoding for novel, previously unseen stimuli? (2) Can brain alignment lead to generalisable representations of auditory signals that are useful for solving a variety of complex auditory tasks? To answer these questions, we relied on two massive datasets: a deep phenotyping dataset from the Courtois neuronal modelling project, where six subjects watched four seasons (36 hours) of the Friends TV series in functional magnetic resonance imaging and the HEAR benchmark, a large battery of downstream auditory tasks. We fine-tuned SoundNet, a small pretrained convolutional neural network with ∼2.5M parameters. Aligning SoundNet with brain data from three seasons of Friends led to substantial improvement in brain encoding in the fourth season, extending beyond auditory and visual cortices. We also observed consistent performance gains on the HEAR benchmark, particularly for tasks with limited training data, where brain-aligned models performed comparably to the best-performing models regardless of size.
Finally, we compared individual and group models, finding that individual models often matched or outperformed group models in both brain encoding and downstream task performance, highlighting the data efficiency of fine-tuning with individual brain data. Our results demonstrate the feasibility of aligning artificial neural network representations with individual brain activity during auditory processing, and suggest that this alignment is particularly beneficial for tasks with limited training data. Future research is needed to establish whether larger models can achieve even better performance and whether the observed gains extend to other tasks, particularly in the context of few-shot learning.
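Brain encoding of the kind evaluated here is commonly scored by regressing voxel responses on model features and correlating predictions with measured responses on held-out data. A minimal numpy sketch with hypothetical shapes follows (the paper fine-tunes SoundNet end-to-end; this shows only the generic linear encoding evaluation):

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit(features, voxels, lam=10.0):
    """Closed-form ridge regression mapping model features to voxel responses."""
    d = features.shape[1]
    return np.linalg.solve(features.T @ features + lam * np.eye(d),
                           features.T @ voxels)

# Hypothetical shapes: 300 fMRI time points, 128 network features, 50 voxels.
n_t, n_feat, n_vox = 300, 128, 50
feats = rng.normal(size=(n_t, n_feat))             # audio-model activations
true_w = rng.normal(size=(n_feat, n_vox))          # synthetic ground truth
bold = feats @ true_w + rng.normal(scale=5.0, size=(n_t, n_vox))

w = ridge_fit(feats[:200], bold[:200])             # fit on early time points
pred = feats[200:] @ w                             # predict held-out responses
# Per-voxel Pearson r between predicted and measured responses.
r = [np.corrcoef(pred[:, v], bold[200:, v])[0, 1] for v in range(n_vox)]
print(np.mean(r) > 0)  # → True
```

The per-voxel correlations can then be mapped back onto the cortical surface to show where a model's features explain brain activity.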
Bridging the Gap Between Target Networks and Functional Regularization
Valentin Thomas
Joseph Marino
Gian Maria Marconi
Rafael Pardinas
Christopher Pal
Mohammad Emtiyaz Khan
Cardiomyocyte orientation recovery at micrometer scale reveals long-axis fiber continuum in heart walls
Drisya Dileep
Tabish A Syed
Tyler FW Sloan
Perundurai S Dhandapany
Minhajuddin Sirajuddin
Coordinated cardiomyocyte contraction drives the mammalian heart to beat and circulate blood. No consensus model of cardiomyocyte geometrical arrangement exists, due to the limited spatial resolution of whole heart imaging methods and the piecemeal nature of studies based on histological sections. By combining microscopy and computer vision, we produced the first‐ever three‐dimensional cardiomyocyte orientation reconstruction across mouse ventricular walls at the micrometer scale, representing a gain of three orders of magnitude in spatial resolution. We recovered a cardiomyocyte arrangement aligned to the long‐axis direction of the outer ventricular walls. This cellular network lies in a thin shell and forms a continuum with longitudinally arranged cardiomyocytes in the inner walls, with a complex geometry at the apex. Our reconstruction methods can be applied at fine spatial scales to further understanding of heart wall electrical function and mechanics, and set the stage for the study of micron‐scale fiber remodeling in heart disease.
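Orientation recovery from microscopy images typically rests on structure-tensor analysis of local image gradients. The following is a toy 2-D numpy sketch on synthetic stripes, not the study's 3-D micrometer-scale pipeline:

```python
import numpy as np

def local_orientation(img):
    """Estimate the dominant gradient orientation of an image patch from the
    2-D structure tensor (smoothed outer product of the gradient field)."""
    gy, gx = np.gradient(img.astype(float))  # np.gradient: axis 0 first, then axis 1
    # Average the tensor entries over the whole patch (a global window here;
    # real pipelines smooth locally, e.g. with a Gaussian).
    jxx, jxy, jyy = (gx * gx).mean(), (gx * gy).mean(), (gy * gy).mean()
    # Angle of the dominant gradient direction; fibers run perpendicular to it.
    theta = 0.5 * np.arctan2(2 * jxy, jxx - jyy)
    return np.degrees(theta)

# Synthetic "fibers": vertical stripes, so gradients point along x (0 degrees).
yy, xx = np.mgrid[0:64, 0:64]
stripes = np.sin(2 * np.pi * xx / 8)
print(round(local_orientation(stripes)))  # → 0
```

Extending this to three dimensions yields a 3x3 tensor per voxel whose eigenvectors give the local fiber axis, which is the standard route to dense orientation fields.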
Using Multiple Vector Channels Improves E(n)-Equivariant Graph Neural Networks
Sékou-Oumar Kaba
Carmelo Gonzales
Santiago Miret
Breaking Barriers to Creative Expression: Co-Designing and Implementing an Accessible Text-to-Image Interface
Atieh Taheri
Mohammad Izadi
Gururaj Shriram
Shaun Kane
Text-to-image generation models have grown in popularity due to their ability to produce high-quality images from a text prompt. One use for this technology is to enable the creation of more accessible art creation software. In this paper, we document the development of an alternative user interface that reduces the typing effort needed to enter image prompts by providing suggestions from a large language model, developed through iterative design and testing within the project team. The results of this testing demonstrate how generative text models can support the accessibility of text-to-image models, enabling users with a range of abilities to create visual art.
Deep reinforcement learning for option pricing and hedging under dynamic expectile risk measures
Saeed Marzban
Jonathan Yu-Meng Li