Publications

Pushing the frontiers in climate modelling and analysis with machine learning
Veronika Eyring
William D. Collins
Pierre Gentine
Elizabeth A. Barnes
Marcelo Barreiro
Tom Beucler
Marc Bocquet
Christopher S. Bretherton
Hannah M. Christensen
Katherine Dagon
David John Gagne
David Hall
Dorit Hammerling
Stephan Hoyer
Fernando Iglesias-Suarez
Ignacio Lopez-Gomez
Marie C. McGraw
Gerald A. Meehl
Maria J. Molina
Claire Monteleoni
Juliane Mueller
Michael S. Pritchard
Jakob Runge
Philip Stier
Oliver Watt-Meyer
Katja Weigel
Rose Yu
Laure Zanna
Development of a Framework for Establishing 'Gold Standard' Outbreak Data from Submitted SARS-CoV-2 Genome Samples
Yannan Shen
Russell Steele
Submitted genomic data for respiratory viruses reflect the emergence and spread of new variants. Although delays in submission limit the utility of these data for prospective surveillance, they may be useful for evaluating other surveillance sources. However, few studies have investigated the use of these data for evaluating aberration detection in surveillance systems. Our study used a Bayesian online change point detection algorithm (BOCP) to detect increases in the number of submitted genome samples as a means of establishing 'gold standard' dates of outbreak onset in multiple countries. We compared models using different data transformations and parameter values. BOCP detected change points that were not sensitive to different parameter settings. We also found data transformations were essential prior to change point detection. Our study presents a framework for using global genomic submission data to develop 'gold standard' dates for the onset of outbreaks due to new viral variants.
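For readers unfamiliar with the method, the sketch below shows a minimal Bayesian online change point detector in the Adams-MacKay style applied to log-transformed daily submission counts. The Gaussian observation model, hazard rate, priors, and the synthetic "new variant" surge are illustrative assumptions, not the study's exact configuration.

```python
# A minimal sketch of Bayesian online change point detection (Adams & MacKay style)
# applied to log-transformed genome-submission counts. Illustrative only: the
# study's exact model, priors, and transformations are assumptions here.
import numpy as np
from scipy import stats

def bocp(x, hazard=1/50, mu0=0.0, var0=4.0, obs_var=1.0):
    """Return the run-length posterior matrix for a 1-D series x."""
    T = len(x)
    R = np.zeros((T + 1, T + 1))          # R[t, r] = P(run length r at time t)
    R[0, 0] = 1.0
    # Conjugate Normal posterior over the mean for each candidate run length
    mean = np.full(T + 1, mu0)
    var = np.full(T + 1, var0)
    for t, xt in enumerate(x, start=1):
        # Posterior predictive for each run length (Normal with inflated variance)
        pred = stats.norm.pdf(xt, loc=mean[:t], scale=np.sqrt(var[:t] + obs_var))
        growth = R[t - 1, :t] * pred * (1 - hazard)      # run continues
        cp = (R[t - 1, :t] * pred * hazard).sum()        # change point: run resets
        R[t, 1:t + 1] = growth
        R[t, 0] = cp
        R[t] /= R[t].sum() + 1e-12
        # Update the Normal posteriors (known observation variance)
        prec_new = 1.0 / var[:t] + 1.0 / obs_var
        mean_new = (mean[:t] / var[:t] + xt / obs_var) / prec_new
        mean[1:t + 1], var[1:t + 1] = mean_new, 1.0 / prec_new
        mean[0], var[0] = mu0, var0
    return R

counts = np.random.poisson(5, 100).astype(float)
counts[60:] += 30                                        # synthetic "new variant" surge
R = bocp(np.log1p(counts))                               # transform before detection
print("Most likely run length at each step:", R.argmax(axis=1)[1:])
```

The log1p transformation here stands in for the data transformations the abstract notes were essential prior to change point detection.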
INTREPPPID - An Orthologue-Informed Quintuplet Network for Cross-Species Prediction of Protein-Protein Interaction
Joseph Szymborski
An overwhelming majority of protein-protein interaction (PPI) studies are conducted in a select few model organisms largely due to constraints in time and cost of the associated “wet lab” experiments. In silico PPI inference methods are ideal tools to overcome these limitations, but often struggle with cross-species predictions. We present INTREPPPID, a method which incorporates orthology data using a new “quintuplet” neural network, which is constructed with five parallel encoders with shared parameters. INTREPPPID incorporates both a PPI classification task and an orthologous locality task. The latter learns embeddings of orthologues that have small Euclidean distances between them and large distances between embeddings of all other proteins. INTREPPPID outperforms all other leading PPI inference methods tested on both the intra-species and cross-species tasks using strict evaluation datasets. We show that INTREPPPID’s orthologous locality loss increases performance because of the biological relevance of the orthologue data, and not due to some other specious aspect of the architecture. Finally, we introduce PPI.bio and PPI Origami, a web server interface for INTREPPPID and a software tool for creating strict evaluation datasets, respectively. Together, these two initiatives aim to make both the use and development of PPI inference tools more accessible to the community.
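As a rough illustration of the orthologous locality idea described above, the following sketch uses a single shared encoder and a triplet-style loss that pulls orthologue embeddings together and pushes all other proteins away. The encoder architecture, margin, and batch construction are assumptions for illustration, not INTREPPPID's actual implementation.

```python
# A hedged sketch of an orthologue-locality objective in the spirit of INTREPPPID:
# a shared encoder embeds protein sequences, and a triplet-style loss pulls
# orthologue embeddings together while pushing unrelated proteins apart.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoder(nn.Module):
    """One encoder with shared parameters, applied to every input branch."""
    def __init__(self, vocab_size=26, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, tokens):                     # tokens: (batch, seq_len)
        h, _ = self.rnn(self.embed(tokens))
        return h.mean(dim=1)                       # (batch, dim) sequence embedding

def orthologue_locality_loss(anchor, orthologue, other, margin=1.0):
    """Small distance to the orthologue, large distance to any other protein."""
    d_pos = F.pairwise_distance(anchor, orthologue)
    d_neg = F.pairwise_distance(anchor, other)
    return F.relu(d_pos - d_neg + margin).mean()

encoder = SharedEncoder()
a, o, n = (torch.randint(0, 26, (8, 120)) for _ in range(3))   # toy protein token batches
loss = orthologue_locality_loss(encoder(a), encoder(o), encoder(n))
loss.backward()
```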
Learning Valid Dual Bounds in Constraint Programming: Boosted Lagrangian Decomposition with Self-Supervised Learning
Swann Bessa
Darius Dabert
Max Bourgeat
Louis-Martin Rousseau
One hundred years of EEG for brain and behaviour research
Faisal Mushtaq
Dominik Welke
Anne Gallagher
Yuri G. Pavlov
Layla Kouara
Jorge Bosch-Bayard
Jasper JF van den Bosch
Mahnaz Arvaneh
Amy R. Bland
Maximilien Chaumon
Cornelius Borck
Xun He
Steven J. Luck
Maro G. Machizawa
Cyril Pernet
Aina Puce
Sidney J. Segalowitz
Christine Rogers
Muhammad Awais
Claudio Babiloni
Neil W. Bailey
Sylvain Baillet
Robert C. A. Bendall
Daniel Brady
Maria L. Bringas-Vega
Niko A. Busch
Ana Calzada-Reyes
Armand Chatard
Peter E. Clayson
Michael X. Cohen
Jonathan Cole
Martin Constant
Alexandra Corneyllie
Damien Coyle
Damian Cruse
Ioannis Delis
Arnaud Delorme
Damien Fair
Tiago H. Falk
Matthias Gamer
Giorgio Ganis
Kilian Gloy
Samantha Gregory
Cameron D. Hassall
Katherine E. Hiley
Richard B. Ivry
Michael Jenkins
Jakob Kaiser
Andreas Keil
Robert T. Knight
Silvia Kochen
Boris Kotchoubey
Olave E. Krigolson
Nicolas Langer
Heinrich R. Liesefeld
Sarah Lippé
Raquel E. London
Annmarie MacNamara
Scott Makeig
Welber Marinovic
Eduardo Martínez-Montes
Aleya A. Marzuki
Ryan K. Mathew
Christoph Michel
José d. R. Millán
Mark Mon-Williams
Lilia Morales-Chacón
Richard Naar
Gustav Nilsonne
Guiomar Niso
Erika Nyhus
Robert Oostenveld
Katharina Paul
Walter Paulus
Daniela M. Pfabigan
Gilles Pourtois
Stefan Rampp
Manuel Rausch
Kay Robbins
Paolo M. Rossini
Manuela Ruzzoli
Barbara Schmidt
Magdalena Senderecka
Narayanan Srinivasan
Yannik Stegmann
Paul M. Thompson
Mitchell Valdes-Sosa
Melle J. W. van der Molen
Domenica Veniero
Edelyn Verona
Bradley Voytek
Dezhong Yao
Alan C. Evans
Pedro Valdes-Sosa
Zero-Shot Object-Centric Representation Learning
Aniket Rajiv Didolkar
Andrii Zadaianchuk
Anirudh Goyal
Michael Curtis Mozer
Georg Martius
Maximilian Seitzer
The goal of object-centric representation learning is to decompose visual scenes into a structured representation that isolates the entities. Recent successes have shown that object-centric representation learning can be scaled to real-world scenes by utilizing pre-trained self-supervised features. However, so far, object-centric methods have mostly been applied in-distribution, with models trained and evaluated on the same dataset. This is in contrast to the wider trend in machine learning towards general-purpose models directly applicable to unseen data and tasks. Thus, in this work, we study current object-centric methods through the lens of zero-shot generalization by introducing a benchmark comprising eight different synthetic and real-world datasets. We analyze the factors influencing zero-shot performance and find that training on diverse real-world images improves transferability to unseen scenarios. Furthermore, inspired by the success of task-specific fine-tuning in foundation models, we introduce a novel fine-tuning strategy to adapt pre-trained vision encoders for the task of object discovery. We find that the proposed approach results in state-of-the-art performance for unsupervised object discovery, exhibiting strong zero-shot transfer to unseen datasets.
Understanding the Local Geometry of Generative Model Manifolds
Ahmed Imtiaz Humayun
Ibtihel Amara
Candice Schumann
Mohammad Havaei
Deep generative models learn continuous representations of complex data manifolds using a finite number of samples during training. For a pre-trained generative model, the common way to evaluate the quality of the learned manifold representation is by computing global metrics like Fréchet Inception Distance using a large number of generated and real samples. However, generative model performance is not uniform across the learned manifold, e.g., for foundation models like Stable Diffusion, generation performance can vary significantly based on the conditioning or initial noise vector being denoised. In this paper we study the relationship between the local geometry of the learned manifold and downstream generation. Based on the theory of continuous piecewise-linear (CPWL) generators, we use three geometric descriptors - scaling (
RF shimming in the cervical spinal cord at 7 T
Daniel Papp
Kyle M. Gilbert
Gaspard Cereza
Alexandre D'Astous
Nibardo Lopez‐Rios
Mathieu Boudreau
Marcus J. Couch
Pedram Yazdanbakhsh
Robert L. Barry
Eva Alonso‐Ortiz
A Survey on Model MoErging: Recycling and Routing Among Specialized Experts for Collaborative Learning
Prateek Yadav
Colin Raffel
Mohammed Muqeeth
Lucas Caccia
Haokun Liu
Tianlong Chen
Mohit Bansal
Leshem Choshen
The availability of performant pre-trained models has led to a proliferation of fine-tuned expert models that are specialized to a particular domain or task. Model MoErging methods aim to recycle expert models to create an aggregate system with improved performance or generalization. A key component of MoErging methods is the creation of a router that decides which expert model(s) to use for a particular input or application. The promise, effectiveness, and large design space of MoErging have spurred the development of many new methods over the past few years. This rapid pace of development has made it challenging to compare different MoErging methods, which are rarely compared to one another and are often validated in different experimental setups. To remedy such gaps, we present a comprehensive survey of MoErging methods that includes a novel taxonomy for cataloging key design choices and clarifying suitable applications for each method. Apart from surveying MoErging research, we inventory software tools and applications that make use of MoErging. We additionally discuss related fields of study such as model merging, multitask learning, and mixture-of-experts models. Taken as a whole, our survey provides a unified overview of existing MoErging methods and creates a solid foundation for future work in this burgeoning field.
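To make the routing component concrete, here is one simple hypothetical router design: each expert is represented by a prototype embedding, and inputs are routed to the most similar experts by cosine similarity. This is only an illustration of the design space the survey catalogs, not a method it proposes; the expert names and the scoring rule are made up.

```python
# A minimal illustration of routing among fine-tuned experts: the router picks
# which expert(s) to apply to each input. Embedding-similarity routing is one
# simple design, not a method endorsed by the survey.
import numpy as np

class EmbeddingRouter:
    def __init__(self, expert_prototypes):
        # expert_prototypes: dict mapping expert name -> prototype vector
        # (e.g., the mean embedding of that expert's training data)
        self.names = list(expert_prototypes)
        self.protos = np.stack([expert_prototypes[n] for n in self.names])

    def route(self, query_embedding, top_k=1):
        """Return the top-k experts by cosine similarity to the query."""
        q = query_embedding / np.linalg.norm(query_embedding)
        p = self.protos / np.linalg.norm(self.protos, axis=1, keepdims=True)
        scores = p @ q
        order = np.argsort(-scores)[:top_k]
        return [(self.names[i], float(scores[i])) for i in order]

rng = np.random.default_rng(0)
router = EmbeddingRouter({
    "legal_expert": rng.normal(size=128),
    "medical_expert": rng.normal(size=128),
    "code_expert": rng.normal(size=128),
})
print(router.route(rng.normal(size=128), top_k=2))
```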
Unveiling the Flaws: A Critical Analysis of Initialization Effect on Time Series Anomaly Detection
Alex Koran
Hadi Hojjati
Deep learning for time-series anomaly detection (TSAD) has gained significant attention over the past decade. Despite the reported improvements in several papers, the practical application of these models remains limited. Recent studies have cast doubt on these models, attributing their results to flawed evaluation techniques. However, the impact of initialization has largely been overlooked. This paper provides a critical analysis of the initialization effects on TSAD model performance. Our extensive experiments reveal that TSAD models are highly sensitive to hyperparameters such as window size, seed number, and normalization. This sensitivity often leads to significant variability in performance, which can be exploited to artificially inflate the reported efficacy of these models. We demonstrate that even minor changes in initialization parameters can result in performance variations that overshadow the claimed improvements from novel model architectures. Our findings highlight the need for rigorous evaluation protocols and transparent reporting of preprocessing steps to ensure the reliability and fairness of anomaly detection methods. This paper calls for a more cautious interpretation of TSAD advancements and encourages the development of more robust and transparent evaluation practices to advance the field and its practical applications.
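The kind of sensitivity sweep the paper advocates can be illustrated in a few lines: vary the seed, window size, and normalization for a simple detector and report the spread of scores. The detector (an IsolationForest over sliding windows) and the synthetic series below are assumptions for illustration, not the paper's experimental setup.

```python
# A small sketch of a sensitivity experiment over seed, window size, and
# normalization, reporting the spread of detection scores across settings.
import numpy as np
from itertools import product
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * rng.normal(size=2000)
labels = np.zeros(2000, dtype=int)
series[1500:1520] += 3.0          # injected anomaly
labels[1500:1520] = 1

def windows(x, w):
    return np.lib.stride_tricks.sliding_window_view(x, w)

results = []
for seed, w, normalize in product([0, 1, 2], [16, 64, 128], [False, True]):
    X = windows(series, w)
    y = labels[w - 1:]                               # label each window by its last point
    if normalize:
        X = (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + 1e-8)
    scores = -IsolationForest(random_state=seed).fit(X).score_samples(X)
    results.append(roc_auc_score(y, scores))

print(f"AUC range across settings: {min(results):.3f} to {max(results):.3f}")
```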
Can a Bayesian Oracle Prevent Harm from an Agent?
Michael K. Cohen
Nikolay Malkin
Matt MacDermott
Damiano Fornasiere
Pietro Greiner
Younesse Kaddar
Is there a way to design powerful AI systems based on machine learning methods that would satisfy probabilistic safety guarantees? With the long-term goal of obtaining a probabilistic guarantee that would apply in every context, we consider estimating a context-dependent bound on the probability of violating a given safety specification. Such a risk evaluation would need to be performed at run-time to provide a guardrail against dangerous actions of an AI. Noting that different plausible hypotheses about the world could produce very different outcomes, and because we do not know which one is right, we derive bounds on the safety violation probability predicted under the true but unknown hypothesis. Such bounds could be used to reject potentially dangerous actions. Our main results involve searching for cautious but plausible hypotheses, obtained by a maximization that involves Bayesian posteriors over hypotheses. We consider two forms of this result, in the iid case and in the non-iid case, and conclude with open problems towards turning such theoretical results into practical AI guardrails.
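A toy numerical reading of the guardrail idea: take the worst-case harm probability over hypotheses that remain plausible under the Bayesian posterior, and reject the action if that bound exceeds a risk tolerance. The plausibility threshold and the numbers below are illustrative assumptions that simplify the paper's actual bounds.

```python
# A toy illustration of a cautious, posterior-based guardrail: bound the harm
# probability by the worst case over plausible hypotheses and reject the action
# if the bound exceeds a risk tolerance. Numbers and thresholds are made up.
import numpy as np

def cautious_harm_bound(posterior, harm_prob, plausibility=0.05):
    """Worst-case P(harm | h) over hypotheses with posterior mass >= plausibility."""
    plausible = posterior >= plausibility
    return float(harm_prob[plausible].max())

posterior = np.array([0.60, 0.30, 0.08, 0.02])   # P(h | data) for four world models
harm_prob = np.array([0.01, 0.02, 0.40, 0.90])   # P(violation | action, h)
bound = cautious_harm_bound(posterior, harm_prob)
risk_tolerance = 0.05
print(f"Cautious bound: {bound:.2f} -> {'reject' if bound > risk_tolerance else 'allow'} action")
```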
Revisiting Feature Prediction for Learning Visual Representations from Video
Adrien Bardes
Quentin Garrido
Jean Ponce
Xinlei Chen
Yann LeCun
Mahmoud Assran
Nicolas Ballas