Publications

Understanding metric-related pitfalls in image analysis validation
Annika Reinke
Minu Dietlinde Tizabi
Michael Baumgartner
Matthias Eisenmann
Doreen Heckmann-Nötzel
A. Emre Kavur
Tim Rädsch
Carole H. Sudre
Laura Acion
Michela Antonelli
Spyridon Bakas
Arriel Benis
Matthew Blaschko
Florian Buettner
M. Jorge Cardoso
Veronika Cheplygina
Jianxu Chen
Evangelia Christodoulou
Beth A. Cimini
Gary S. Collins
Keyvan Farahani
Luciana Ferrer
Adrian Galdran
Bram van Ginneken
Ben Glocker
Patrick Godau
Robert Cary Haase
Daniel A. Hashimoto
Michael M. Hoffman
Merel Huisman
Fabian Isensee
Pierre Jannin
Charles E. Kahn
Dagmar Kainmueller
Bernhard Kainz
Alexandros Karargyris
Alan Karthikesalingam
Hannes Kenngott
Jens Kleesiek
Florian Kofler
Thijs Kooi
Annette Kopp-Schneider
Michal Kozubek
Anna Kreshuk
Tahsin Kurc
Bennett A. Landman
Geert Litjens
Amin Madani
Klaus Maier-Hein
Anne L. Martel
Peter Mattson
Erik Meijering
Bjoern Menze
Karel G.M. Moons
Henning Müller
Brennan Nichyporuk
Felix Nickel
Jens Petersen
Susanne M. Rafelski
Nasir Rajpoot
Mauricio Reyes
Michael A. Riegler
Nicola Rieke
Julio Saez-Rodriguez
Ben Van Calster
Clara I. Sánchez
Shravya Shetty
Maarten van Smeden
Ronald M. Summers
Abdel Aziz Taha
Aleksei Tiulpin
Sotirios A. Tsaftaris
Gael Varoquaux
Manuel Wiesenfarth
Ziv Rafael Yaniv
Paul F. Jäger
Lena Maier-Hein
The Leukemoid Reaction in Severe Alcoholic Hepatitis: A Case Report
Sachin Agrawal
Sunil Kumar
Sourya Acharya
Iterated Denoising Energy Matching for Sampling from Boltzmann Densities
Tara Akhound-Sadegh
Jarrid Rector-Brooks
Joey Bose
Sarthak Mittal
Pablo Lemos
Cheng-Hao Liu
Marcin Sendera
Nikolay Malkin
Alexander Tong
Efficiently generating statistically independent samples from an unnormalized probability distribution, such as equilibrium samples of many-body systems, is a foundational problem in science. In this paper, we propose Iterated Denoising Energy Matching (iDEM), an iterative algorithm that uses a novel stochastic score matching objective leveraging solely the energy function and its gradient -- and no data samples -- to train a diffusion-based sampler. Specifically, iDEM alternates between (I) sampling regions of high model density from a diffusion-based sampler and (II) using these samples in our stochastic matching objective to further improve the sampler. iDEM is scalable to high dimensions, as the inner matching objective is simulation-free and requires no MCMC samples. Moreover, by leveraging the fast mode mixing behavior of diffusion, iDEM smooths out the energy landscape, enabling efficient exploration and learning of an amortized sampler. We evaluate iDEM on a suite of tasks ranging from standard synthetic energy functions to invariant…
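A minimal sketch of the alternating loop this abstract describes, under toy assumptions: a double-well stand-in energy, a plain MLP in place of a real diffusion network, and a crude proposal step instead of integrating a learned reverse diffusion. All names here are illustrative, not the authors' released code.

```python
import torch

def energy(x):
    """Stand-in target energy: a simple double well per dimension."""
    return ((x ** 2 - 1.0) ** 2).sum(dim=-1)

def mc_score_target(x, sigma, k=64):
    """Monte Carlo score estimate using only the energy and its gradient:
    a softmax(-E)-weighted average of -grad E over k Gaussian perturbations,
    i.e. the gradient of log of an MC estimate of the noised density."""
    eps = torch.randn(k, *x.shape) * sigma
    xk = (x.unsqueeze(0) + eps).requires_grad_(True)
    logw = -energy(xk)                              # log importance weights
    (grads,) = torch.autograd.grad(logw.sum(), xk)  # -grad E at perturbed points
    w = torch.softmax(logw, dim=0).unsqueeze(-1)
    return (w * grads).sum(dim=0).detach()

score_net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.SiLU(),
                                torch.nn.Linear(64, 2))
opt = torch.optim.Adam(score_net.parameters(), lr=1e-3)

for outer in range(100):
    # (I) draw proposals from the current model; a real implementation would
    # integrate the learned reverse diffusion -- here we crudely push noise
    # through the network as a stand-in proposal.
    with torch.no_grad():
        x = torch.randn(128, 2) + score_net(torch.randn(128, 2))
    # (II) regress the network onto the energy-based score estimate.
    target = mc_score_target(x, sigma=1.0)
    loss = ((score_net(x) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```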
On the Privacy of Selection Mechanisms with Gaussian Noise
Jonathan Lebensold
Borja Balle
V-STaR: Training Verifiers for Self-Taught Reasoners
Arian Hosseini
Xingdi Yuan
Nikolay Malkin
Rishabh Agarwal
Common self-improvement approaches for large language models (LLMs), such as STaR (Zelikman et al., 2022), iteratively fine-tune LLMs on self-generated solutions to improve their problem-solving ability. However, these approaches discard the large number of incorrect solutions generated during this process, potentially neglecting valuable information in such solutions. To address this shortcoming, we propose V-STaR, which utilizes both the correct and incorrect solutions generated during the self-improvement process to train a verifier using DPO that judges the correctness of model-generated solutions. This verifier is used at inference time to select one solution among many candidate solutions. Running V-STaR for multiple iterations results in progressively better reasoners and verifiers, delivering a 4% to 17% test accuracy improvement over existing self-improvement and verification approaches on common code generation and math reasoning benchmarks with LLaMA2 models.
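A hypothetical sketch of the data flow the abstract describes: correct generations become STaR-style fine-tuning data, (correct, incorrect) pairs become DPO preference data for the verifier, and the verifier ranks candidates at inference. `generate`, `is_correct`, and `verifier_score` are placeholder stubs, not the paper's code.

```python
import random

def generate(problem, n=8):
    """Stub: sample n candidate solutions from the current LLM."""
    return [f"solution-{i}-to-{problem}" for i in range(n)]

def is_correct(problem, solution):
    """Stub: e.g. run unit tests or compare the final answer."""
    return random.random() < 0.3

def collect_vstar_data(problems):
    sft_data, dpo_pairs = [], []
    for p in problems:
        labeled = [(s, is_correct(p, s)) for s in generate(p)]
        good = [s for s, ok in labeled if ok]
        bad = [s for s, ok in labeled if not ok]
        sft_data += [(p, s) for s in good]   # fine-tune the generator (STaR)
        dpo_pairs += [(p, g, b) for g in good for b in bad]  # verifier data
    return sft_data, dpo_pairs

def best_of_n(problem, verifier_score, n=8):
    """Inference: return the candidate the trained verifier ranks highest."""
    return max(generate(problem, n), key=lambda s: verifier_score(problem, s))

sft, pairs = collect_vstar_data(["p1", "p2"])
print(best_of_n("p3", verifier_score=lambda p, s: random.random()))
```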
Deep Learning for Data-Driven Districting-and-Routing
Arthur Ferraz
Thibaut Vidal
Implicit Diffusion: Efficient Optimization through Stochastic Sampling
Pierre Marion
Anna Korba
Peter Bartlett
Mathieu Blondel
Valentin De Bortoli
Arnaud Doucet
Felipe Llinares-López
Quentin Berthet
In-Context Learning Can Re-learn Forbidden Tasks
Sophie Xhonneux
David Dobre
Despite significant investment into safety training, large language models (LLMs) deployed in the real world still suffer from numerous vulnerabilities. One perspective on LLM safety training is that it algorithmically forbids the model from answering toxic or harmful queries. To assess the effectiveness of safety training, in this work we study forbidden tasks, i.e., tasks the model is designed to refuse to answer. Specifically, we investigate whether in-context learning (ICL) can be used to re-learn forbidden tasks despite the explicit fine-tuning of the model to refuse them. We first examine a toy example of refusing sentiment classification to demonstrate the problem. Then, we use ICL on a model fine-tuned to refuse to summarise made-up news articles. Finally, we investigate whether ICL can undo safety training, which could represent a major security risk. For the safety task, we look at Vicuna-7B, Starling-7B, and Llama2-7B. We show that the attack works out-of-the-box on Starling-7B and Vicuna-7B but fails on Llama2-7B. We further propose an ICL attack that uses the chat template tokens like a prompt injection attack to achieve a better attack success rate on Vicuna-7B and Starling-7B. Trigger Warning: the appendix contains LLM-generated text with violence, suicide, and misinformation.
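For illustration, a minimal way to assemble the kind of few-shot prompt such a study would use to try to re-learn a refused task. The template and the sentiment demonstrations below are placeholders in the spirit of the abstract's toy example, not the paper's exact prompts or chat-template tokens.

```python
def icl_prompt(demos, query, system="You are a helpful assistant."):
    """Prepend k in-context demonstrations of the (refused) task to a query."""
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in demos)
    return f"{system}\n\n{shots}\n\nInput: {query}\nOutput:"

demos = [("Great movie, loved it.", "positive"),
         ("Terrible, a waste of time.", "negative")]
print(icl_prompt(demos, "The plot was dull but the acting was fine."))
```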
When is Momentum Extragradient Optimal? A Polynomial-Based Analysis
Junhyung Lyle Kim
Anastasios Kyrillidis
Fabian Pedregosa
The extragradient method has gained popularity due to its robust convergence properties for differentiable games. Unlike single-objective optimization, game dynamics involve complex interactions reflected by the eigenvalues of the game vector field's Jacobian scattered across the complex plane. This complexity can cause the simple gradient method to diverge, even for bilinear games, while the extragradient method achieves convergence. Building on the recently proven accelerated convergence of the momentum extragradient method for bilinear games (Azizian et al., 2020), we use a polynomial-based analysis to identify three distinct scenarios where this method exhibits further accelerated convergence. These scenarios encompass situations where the eigenvalues reside on the (positive) real line, lie on the real line alongside complex conjugates, or exist solely as complex conjugates. Furthermore, we derive the hyperparameters for each scenario that achieve the fastest convergence rate.
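A small sketch of the momentum extragradient iteration the analysis concerns, applied to a toy bilinear game; the step sizes and momentum below are ad hoc illustrative choices, not the optimal hyperparameters derived in the paper.

```python
import numpy as np

def momentum_extragradient(F, z0, eta=0.1, gamma=0.1, beta=0.05, steps=2000):
    z_prev, z = z0.copy(), z0.copy()
    for _ in range(steps):
        z_half = z - gamma * F(z)                            # extrapolation step
        z_next = z - eta * F(z_half) + beta * (z - z_prev)   # update + momentum
        z_prev, z = z, z_next
    return z

# Bilinear game min_x max_y x^T A y, whose vector field is F(z) = (A y, -A^T x)
A = np.array([[2.0, 1.0], [0.0, 1.5]])

def F(z):
    x, y = z[:2], z[2:]
    return np.concatenate([A @ y, -A.T @ x])

print(np.linalg.norm(momentum_extragradient(F, np.ones(4))))  # -> near 0
```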
Channel-Selective Normalization for Label-Shift Robust Test-Time Adaptation
Pedro Vianna
Muawiz Chaudhary
Paria Mehrbod
An Tang
Guy Cloutier
Michael Eickenberg
Deep neural networks have useful applications in many different tasks; however, their performance can be severely affected by changes in the data distribution. For example, in the biomedical field, performance can be affected by changes in the data (different machines, populations) between training and test datasets. To ensure robustness and generalization to real-world scenarios, test-time adaptation has recently been studied as an approach to adjust models to a new data distribution during inference. Test-time batch normalization is a simple and popular method that has achieved compelling performance on domain shift benchmarks. It is implemented by recalculating batch normalization statistics on test batches. Prior work has focused on analysis with test data that has the same label distribution as the training data. However, in many practical applications this technique is vulnerable to label distribution shifts, sometimes producing catastrophic failure. This presents a risk in applying test-time adaptation methods in deployment. We propose to tackle this challenge by only selectively adapting channels in a deep network, minimizing drastic adaptation that is sensitive to label shifts. Our selection scheme is based on two principles that we empirically motivate: (1) later layers of networks are more sensitive to label shift, and (2) individual features can be sensitive to specific classes. We apply the proposed technique to three classification tasks, including CIFAR10-C, Imagenet-C, and diagnosis of fatty liver, where we explore both covariate and label distribution shifts. We find that our method brings the benefits of TTA while significantly reducing the risk of failure common in other methods, and that it is robust to the choice of hyperparameters.
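A hypothetical sketch of channel-selective test-time normalization in the spirit of the method described: a per-channel mask decides which channels use test-batch statistics versus the frozen training statistics. The wrapper class and the all-ones mask are illustrative; the paper's actual selection criteria are not reproduced here.

```python
import torch
import torch.nn as nn

class SelectiveBN2d(nn.Module):
    """Wraps a trained BatchNorm2d; masked channels adapt to the test batch."""
    def __init__(self, bn: nn.BatchNorm2d, adapt_mask: torch.Tensor):
        super().__init__()
        self.bn = bn
        self.mask = adapt_mask.float().view(1, -1, 1, 1)

    def forward(self, x):
        # statistics from the training data (stored in the BN layer)
        mu_tr = self.bn.running_mean.view(1, -1, 1, 1)
        var_tr = self.bn.running_var.view(1, -1, 1, 1)
        # statistics recomputed on the current test batch
        mu_te = x.mean(dim=(0, 2, 3), keepdim=True)
        var_te = x.var(dim=(0, 2, 3), keepdim=True, unbiased=False)
        # per-channel blend: adapt only where the mask is on
        mu = self.mask * mu_te + (1 - self.mask) * mu_tr
        var = self.mask * var_te + (1 - self.mask) * var_tr
        w = self.bn.weight.view(1, -1, 1, 1)
        b = self.bn.bias.view(1, -1, 1, 1)
        return w * (x - mu) / (var + self.bn.eps).sqrt() + b

# usage: wrap an existing layer, here adapting every channel of this block
bn = nn.BatchNorm2d(16).eval()
layer = SelectiveBN2d(bn, adapt_mask=torch.ones(16, dtype=torch.bool))
out = layer(torch.randn(8, 16, 32, 32))
```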
On diffusion models for amortized inference: Benchmarking and improving stochastic control and sampling
Marcin Sendera
Minsu Kim
Sarthak Mittal
Pablo Lemos
Luca Scimeca
Jarrid Rector-Brooks
Alexandre Adam
Nikolay Malkin
We study the problem of training diffusion models to sample from a distribution with a given unnormalized density or energy function. We benchmark several diffusion-structured inference methods, including simulation-based variational approaches and off-policy methods (continuous generative flow networks). Our results shed light on the relative advantages of existing algorithms while bringing into question some claims from past work. We also propose a novel exploration strategy for off-policy methods, based on local search in the target space with the use of a replay buffer, and show that it improves the quality of samples on a variety of target distributions. Our code for the sampling methods and benchmarks studied is made public at https://github.com/GFNOrg/gfn-diffusion as a base for future work on diffusion models for amortized inference.
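An illustrative sketch, under toy assumptions, of the proposed exploration idea: draw samples from a replay buffer, refine them locally with a few Langevin steps on the target energy, and feed the refined samples back into the buffer for off-policy training. The energy and buffer here are stand-ins, not the code at the linked repository.

```python
import torch

def energy(x):
    """Stand-in unnormalized target energy (double well per dimension)."""
    return ((x ** 2 - 1.0) ** 2).sum(dim=-1)

def local_search(x, step=0.01, n_steps=20):
    """Refine samples with unadjusted Langevin steps on the energy."""
    x = x.clone().requires_grad_(True)
    for _ in range(n_steps):
        (g,) = torch.autograd.grad(energy(x).sum(), x)
        with torch.no_grad():
            x += -step * g + (2 * step) ** 0.5 * torch.randn_like(x)
    return x.detach()

buffer = [torch.randn(2) for _ in range(256)]        # replay buffer
idx = torch.randint(len(buffer), (32,))
batch = torch.stack([buffer[i] for i in idx])
refined = local_search(batch)                        # explore near the modes
buffer.extend(refined.unbind(0))                     # feed back into the buffer
```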
E(3)-Equivariant Mesh Neural Networks
Thuan N.A. Trang
Nhat-Khang Ngô
Daniel Levy
Thieu N. Vo
Truong Son Hy
Triangular meshes are widely used to represent three-dimensional objects. As a result, many recent works have addressed the need for geometric deep learning on 3D meshes. However, we observe that the complexities in many of these architectures do not translate to practical performance, and simple deep models for geometric graphs are competitive in practice. Motivated by this observation, we minimally extend the update equations of E(n)-Equivariant Graph Neural Networks (EGNNs) (Satorras et al., 2021) to incorporate mesh face information, and further improve them to account for long-range interactions through hierarchy. The resulting architecture, Equivariant Mesh Neural Network (EMNN), outperforms other, more complicated equivariant methods on mesh tasks, with a fast run-time and no expensive pre-processing. Our implementation is available at https://github.com/HySonLab/EquiMesh
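For context, a toy sketch of the EGNN-style update (Satorras et al., 2021) that the abstract says EMNN minimally extends with mesh-face terms; the face extension itself is omitted, and the layer below is a simplified illustration rather than the released EquiMesh code.

```python
import torch
import torch.nn as nn

class EGNNLayer(nn.Module):
    def __init__(self, h_dim):
        super().__init__()
        self.phi_e = nn.Sequential(nn.Linear(2 * h_dim + 1, h_dim), nn.SiLU())
        self.phi_x = nn.Linear(h_dim, 1, bias=False)
        self.phi_h = nn.Sequential(nn.Linear(2 * h_dim, h_dim), nn.SiLU())

    def forward(self, h, x, edges):
        src, dst = edges                                      # edge list (2, E)
        d2 = ((x[src] - x[dst]) ** 2).sum(-1, keepdim=True)   # E(3)-invariant
        m = self.phi_e(torch.cat([h[src], h[dst], d2], dim=-1))
        # coordinates move along relative positions: preserves equivariance
        x = x.index_add(0, src, (x[src] - x[dst]) * self.phi_x(m))
        agg = torch.zeros_like(h).index_add(0, src, m)        # sum messages
        h = self.phi_h(torch.cat([h, agg], dim=-1))
        return h, x

# usage on a tiny 4-node graph with 3D coordinates
layer = EGNNLayer(h_dim=8)
h, x = torch.randn(4, 8), torch.randn(4, 3)
edges = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
h, x = layer(h, x, edges)
```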