Publications

The Past, Present, and Future of the Brain Imaging Data Structure (BIDS)
Russell A. Poldrack
Christopher J. Markiewicz
Stefan Appelhoff
Yoni K. Ashar
Tibor Auer
Sylvain Baillet
Shashank Bansal
Leandro Beltrachini
Christian G. Bénar
Giacomo Bertazzoli
Suyash Bhogawar
Ross W. Blair
Marta Bortoletto
Mathieu Boudreau
Teon L. Brooks
Vince D. Calhoun
Filippo Maria Castelli
Patricia Clement
Alexander L. Cohen … (see 100 more)
Sasha D’Ambrosio
Gilles de Hollander
María de la Iglesia-Vayá
Alejandro de la Vega
Arnaud Delorme
Orrin Devinsky
Dejan Draschkow
Eugene Paul Duff
Elizabeth DuPre
Eric Earl
Oscar Esteban
Franklin W. Feingold
Guillaume Flandin
Anthony Galassi
Giuseppe Gallitto
Melanie Ganz
Rémi Gau
James Gholam
Sulagna Dia Ghosh
Satrajit S. Ghosh
Alessio Giacomel
Ashley G. Gillman
Padraig Gleeson
Alexandre Gramfort
Samuel Guay
Giacomo Guidali
Yaroslav O. Halchenko
Daniel A. Handwerker
Nell Hardcastle
Peer Herholz
Dora Hermes
Christopher J. Honey
Robert B. Innis
Horea-Ioan Ioanas
Andrew Jahn
Agah Karakuzu
David B. Keator
Gregory Kiar
Balint Kincses
Angela R. Laird
Jonathan C. Lau
Alberto Lazari
Jon Haitz Legarreta
Adam Li
Xiangrui Li
Bradley C. Love
Hanzhang Lu
Eleonora Marcantoni
Camille Maumet
Giacomo Mazzamuto
Steven L. Meisler
Mark Mikkelsen
Henk Mutsaerts
Thomas E. Nichols
Aki Nikolaidis
Gustav Nilsonne
Guiomar Niso
Martin Norgaard
Thomas W. Okell
Robert Oostenveld
Eduard Ort
Patrick J. Park
Mateusz Pawlik
Cyril R. Pernet
Franco Pestilli
Jan Petr
Christophe Phillips
Jean-Baptiste Poline
Luca Pollonini
Pradeep Reddy Raamana
Petra Ritter
Gaia Rizzo
Kay A. Robbins
Alexander P. Rockhill
Christine Rogers
Ariel Rokem
Chris Rorden
Alexandre Routier
Jose Manuel Saborit-Torres
Taylor Salo
Michael Schirner
Robert E. Smith
Tamas Spisak
Julia Sprenger
Nicole C. Swann
Martin Szinte
Sylvain Takerkart
Bertrand Thirion
Adam G. Thomas
Sajjad Torabian
Gael Varoquaux
Bradley Voytek
Julius Welzel
Martin Wilson
Tal Yarkoni
Krzysztof J. Gorgolewski
DyG2Vec: Efficient Representation Learning for Dynamic Graphs
Mohammad Alomrani
Mahdi Biparva
Yingxue Zhang
Temporal graph neural networks have shown promising results in learning inductive representations by automatically extracting temporal patterns. However, previous works often rely on complex memory modules or inefficient random walk methods to construct temporal representations. To address these limitations, we present an efficient yet effective attention-based encoder that leverages temporal edge encodings and window-based subgraph sampling to generate task-agnostic embeddings. Moreover, we propose a joint-embedding architecture using non-contrastive SSL to learn rich temporal embeddings without labels. Experimental results on 7 benchmark datasets indicate that on average, our model outperforms SoTA baselines on the future link prediction task by 4.23% for the transductive setting and 3.30% for the inductive setting while only requiring 5-10x less training/inference time. Lastly, different aspects of the proposed framework are investigated through experimental analysis and ablation studies. The code is publicly available at https://github.com/huawei-noah/noah-research/tree/master/graph_atlas.
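As a rough illustration of the window-based subgraph sampling described above, the sketch below keeps only the interactions inside a fixed time window ending at a query time. The edge-list format and window semantics are assumptions made for illustration, not DyG2Vec's actual implementation.

```python
from collections import defaultdict

def window_subgraph(edges, t_query, window):
    """Keep only edges whose timestamps fall in (t_query - window, t_query].

    edges: list of (src, dst, timestamp) tuples -- an assumed format.
    Returns the surviving edges plus per-node temporal neighborhoods.
    """
    sub = [(u, v, t) for (u, v, t) in edges if t_query - window < t <= t_query]
    neigh = defaultdict(list)
    for u, v, t in sub:
        neigh[u].append((v, t))  # undirected neighborhoods for the encoder
        neigh[v].append((u, t))
    return sub, dict(neigh)

# Four interactions; with a window of 3.0 ending at t=5.0,
# the edge at t=1.0 falls outside the window and is dropped.
edges = [(0, 1, 1.0), (1, 2, 2.5), (0, 2, 4.0), (2, 3, 5.0)]
sub, neigh = window_subgraph(edges, t_query=5.0, window=3.0)
```

Restricting attention to a recent window like this is what lets the encoder avoid a persistent memory module: each prediction only ever sees a bounded slice of the interaction history.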
JaxPruner: A concise library for sparsity research
Joo Hyung Lee
Wonpyo Park
Nicole Elyse Mitchell
Jonathan Pilault
Johan Samir Obando Ceron
Han-Byul Kim
Namhoon Lee
Elias Frantar
Yun Long
Amir Yazdanbakhsh
Shivani Agrawal
Suvinay Subramanian
Xin Wang
Sheng-Chun Kao
Xingyao Zhang
Trevor Gale
Aart J.C. Bik
Woohyun Han
Milen Ferev
Zhonglin Han … (see 5 more)
Hong-Seok Kim
Yann Dauphin
Utku Evci
This paper introduces JaxPruner, an open-source JAX-based pruning and sparse training library for machine learning research. JaxPruner aims to accelerate research on sparse neural networks by providing concise implementations of popular pruning and sparse training algorithms with minimal memory and latency overhead. Algorithms implemented in JaxPruner use a common API and work seamlessly with the popular optimization library Optax, which, in turn, enables easy integration with existing JAX-based libraries. We demonstrate this ease of integration by providing examples in four different codebases: Scenic, t5x, Dopamine, and FedJAX, and provide baseline experiments on popular benchmarks.
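As a library-free illustration of the simplest algorithm such libraries implement, the sketch below performs one-shot magnitude pruning in plain Python. This is not JaxPruner's API (whose interface integrates with Optax gradient transformations); it only shows the underlying idea of zeroing the smallest-magnitude weights.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of a flat weight list.

    weights: list of floats; sparsity: fraction in [0, 1] to remove.
    Ties at the threshold may zero slightly more than the target count.
    """
    k = int(len(weights) * sparsity)  # number of weights to zero
    if k == 0:
        return list(weights)
    # The threshold is the k-th smallest absolute value.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

# Prune half of four weights: the two smallest magnitudes are zeroed.
pruned = magnitude_prune([0.1, -0.5, 0.05, 2.0], sparsity=0.5)
# -> [0.0, -0.5, 0.0, 2.0]
```

In a real sparse-training loop this masking step would be re-applied on a schedule (or combined with mask regrowth), which is the kind of bookkeeping a dedicated library handles for you.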
GABAergic inhibition shapes behavior and neural dynamics in human visual working memory
Jan Kujala
Carolina Ciumas
Julien Jung
Sandrine Bouvard
Françoise Lecaignard
Amélie Lothe
Romain Bouet
Philippe Ryvlin
Neuronal inhibition, primarily mediated by GABAergic neurotransmission, is crucial for brain development and healthy cognition. Gamma-aminobutyric acid concentration levels in sensory areas have been shown to correlate with hemodynamic and oscillatory neuronal responses. How these measures relate to one another during working memory, a higher-order cognitive process, is still poorly understood. We address this gap by collecting magnetoencephalography, functional magnetic resonance imaging, and Flumazenil positron emission tomography data within the same subject cohort using an n-back working-memory paradigm. By probing the relationship between GABAA receptor distribution, neural oscillations, and Blood Oxygen Level Dependent (BOLD) modulations, we found that GABAA receptor density in higher-order cortical areas predicted reaction times on the working-memory task and correlated positively with the peak frequency of gamma power modulations and negatively with BOLD amplitude. These findings support and extend theories linking gamma oscillations and hemodynamic responses to gamma-aminobutyric acid neurotransmission, the excitation-inhibition balance, and cognitive performance in humans. Given the small sample size of the study, future work should test whether these findings hold in other, larger cohorts and examine in detail how the GABAergic system and neural fluctuations jointly support working-memory task performance.
On the Stability of a non-hyperbolic nonlinear map with non-bounded set of non-isolated fixed points with applications to Machine Learning
Roberta Hansen
Matias Vera
Lautaro Estienne
Luciana Ferrer
Towards Enhancing the Reproducibility of Deep Learning Bugs: An Empirical Study
Mehil B. Shah
Mohammad Masudur Rahman
Are LLMs Robust for Spoken Dialogues?
Seyed Mahed Mousavi
Gabriel Roccabruna
Simone Alghisi
Massimo Rizzoli
Giuseppe Riccardi
Large Pre-Trained Language Models have demonstrated state-of-the-art performance in different downstream tasks, including dialogue state tracking and end-to-end response generation. Nevertheless, most of the publicly available datasets and benchmarks on task-oriented dialogues focus on written conversations. Consequently, the robustness of the developed models to spoken interactions is unknown. In this work, we have evaluated the performance of LLMs for spoken task-oriented dialogues on the DSTC11 test sets. Due to the lack of proper spoken dialogue datasets, we have automatically transcribed a development set of spoken dialogues with a state-of-the-art ASR engine. We have characterized the ASR-error types and their distributions and simulated these errors in a large dataset of dialogues. We report the intrinsic (perplexity) and extrinsic (human evaluation) performance of fine-tuned GPT-2 and T5 models in two subtasks of response generation and dialogue state tracking, respectively. The results show that LLMs are not robust to spoken noise by default; however, fine-tuning/training such models on a proper dataset of spoken TODs can result in more robust performance.
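The ASR-error simulation step described above can be approximated by word-level noise injection. The sketch below is a toy version with a made-up confusion list and illustrative error rates, not the error distributions actually characterized in the paper.

```python
import random

def simulate_asr_errors(text, sub_rate=0.1, del_rate=0.05, seed=0):
    """Inject word-level substitutions and deletions into a written
    dialogue turn, loosely mimicking ASR-style noise.

    The rates and the confusion list are illustrative placeholders,
    not fitted to any real ASR engine.
    """
    rng = random.Random(seed)  # fixed seed for reproducible noise
    confusions = {"two": "to", "their": "there", "for": "four"}
    out = []
    for word in text.split():
        r = rng.random()
        if r < del_rate:
            continue                                  # simulated deletion
        if r < del_rate + sub_rate:
            out.append(confusions.get(word.lower(), word))  # substitution
        else:
            out.append(word)
    return " ".join(out)
```

A fitted version would draw substitutions from the observed confusion pairs and set the rates to the per-error-type frequencies measured on the transcribed development set.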
A primer on the use of machine learning to distil knowledge from data in biological psychiatry.
Thomas P. Quinn
Jonathan L. Hess
Victoria S. Marshe
Michelle M. Barnett
Anne-Christin Hauschild
Malgorzata Maciukiewicz
Samar S. M. Elsheikh
Xiaoyu Men
Emanuel Schwarz
Michael S. Breen
Eric J. Barnett
Yanli Zhang-James
Mehmet Eren Ahsen
Han Cao
Junfang Chen
Jiahui Hou
Asif Salekin
Ping-I Lin
Kristin K. Nicodemus … (see 7 more)
Andreas Meyer-Lindenberg
Isabelle Bichindaritz
Stephen V. Faraone
Murray J. Cairns
Gaurav Pandey
Daniel J. Müller
Stephen J. Glatt
AITA: AI trustworthiness assessment
Bertrand Braunschweig
Stefan Buijsman
Faicel Chamroukhi
Fredrik Heintz
Juliette Mattioli
Maximilian Poretschkin
Asymmetry in the complexity of the multi-commodity network pricing problem
Quang Minh Bui
José Neto
A Column Generation Scheme for Distributionally Robust Multi-Item Newsvendor Problems
Shanshan Wang
This paper studies a distributionally robust multi-item newsvendor problem, where the demand distribution is unknown but specified with a general event-wise ambiguity set. Using the event-wise affine decision rules, we can obtain a conservative approximation formulation of the problem, which can typically be further reformulated as a linear program. In order to efficiently solve the resulting large-scale linear program, we develop a column generation-based decomposition scheme and speed up the computational efficiency by exploiting a special column selection strategy and stopping early based on a Karush-Kuhn-Tucker condition test. Focusing on the Wasserstein ambiguity set and the event-wise mean absolute deviation set, a computational study demonstrates both the computational efficiency of the proposed algorithm, which significantly outperforms a commercial solver and a Benders decomposition method, and the out-of-sample superiority of distributionally robust solutions relative to their sample average approximation counterparts. History: Accepted by Nicola Secomandi, Area Editor for Stochastic Models & Reinforcement Learning. Funding: This work was supported by the Natural Sciences and Engineering Research Council of Canada [492997-2016, RGPIN-2016-05208], the National Natural Science Foundation of China [71972012], Alliance de recherche numérique du Canada, and Canada Research Chairs [CRC-2018-00105]. It was also supported by Groupe d’études et de recherche en analyse des décisions (GERAD). Finally, this research was enabled in part by support provided by the Digital Research Alliance of Canada ( https://alliancecan.ca/en ). Supplemental Material: The software that supports the findings of this study is available within the paper and its supplemental information ( https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2022.0010 ) as well as from the IJOC GitHub software repository ( https://github.com/INFORMSJoC/2022.0010 ).
The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/ .
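For context on the sample average approximation (SAA) counterparts mentioned above: the classic single-item newsvendor has a closed-form SAA solution, namely the critical-ratio quantile of the empirical demand. The sketch below illustrates only that baseline; the distributionally robust multi-item model in the paper is far more involved.

```python
import math

def saa_newsvendor(demand_samples, price, cost):
    """Sample-average-approximation order quantity for one item.

    Classic newsvendor result: the optimal quantity is the critical-ratio
    quantile of the demand distribution, q* = F^{-1}((p - c) / p).
    This sketches the SAA baseline only, for a single item.
    """
    ratio = (price - cost) / price          # critical ratio
    sorted_d = sorted(demand_samples)
    # Smallest empirical quantile whose CDF value reaches the ratio.
    k = max(0, math.ceil(ratio * len(sorted_d)) - 1)
    return sorted_d[k]

# With p=10 and c=4 the critical ratio is 0.6, i.e. the 60th percentile
# of the observed demands.
q = saa_newsvendor([80, 100, 120, 90, 110], price=10.0, cost=4.0)
```

The out-of-sample fragility of this quantile estimate under few samples is exactly what motivates hedging against an ambiguity set around the empirical distribution.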
Dataset Difficulty and the Role of Inductive Bias
Devin Kwok
Nikhil Anand
Jonathan Frankle
David Rolnick
Motivated by the goals of dataset pruning and defect identification, a growing body of methods has been developed to score individual examples within a dataset. These methods, which we call "example difficulty scores", are typically used to rank or categorize examples, but the consistency of rankings between different training runs, scoring methods, and model architectures is generally unknown. To determine how example rankings vary due to these random and controlled effects, we systematically compare different formulations of scores over a range of runs and model architectures. We find that scores largely share the following traits: they are noisy over individual runs of a model, strongly correlated with a single notion of difficulty, and reveal examples that range from being highly sensitive to insensitive to the inductive biases of certain model architectures. Drawing from statistical genetics, we develop a simple method for fingerprinting model architectures using a few sensitive examples. These findings guide practitioners in maximizing the consistency of their scores (e.g. by choosing appropriate scoring methods, number of runs, and subsets of examples), and establish comprehensive baselines for evaluating scores in the future.
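The run-to-run consistency of example rankings discussed above is typically measured with a rank correlation. The sketch below computes a tie-free Spearman correlation between two hypothetical difficulty-score vectors for the same examples; the score values are made up for illustration.

```python
def rank(xs):
    """0-based ranks of a score vector, assuming no tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

def spearman(a, b):
    """Spearman rank correlation between two score vectors (no ties)."""
    n = len(a)
    ra, rb = rank(a), rank(b)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

run1 = [0.9, 0.1, 0.5, 0.7]   # per-example difficulty, run 1 (made up)
run2 = [0.8, 0.2, 0.4, 0.9]   # same examples scored in a second run
consistency = spearman(run1, run2)
```

Averaging such correlations over many run pairs is one simple way to quantify how noisy a scoring method is before trusting it for dataset pruning.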