Publications

An enhanced wideband tracking method for characteristic modes
Chao Huang
Chenjiang Guo
Xia Ma
Yi Yuan
An enhanced wideband tracking method for characteristic modes (CMs) is investigated in this paper. The method consists of three stages, and its core tracking stage (CTS) is based on a classical eigenvector correlation-based algorithm. To decrease the tracking time and eliminate crossing avoidance (CRA), we append a commonly used eigenvalue filter (EF) as the preprocessing stage and a novel postprocessing stage to the CTS. The proposed postprocessing stage can identify all CRA mode pairs by analyzing their trajectory and correlation characteristics. Subsequently, it can predict the corresponding CRA frequencies and rapidly correct the problematic quantities. Considering potential variations in the number of eigenvectors at consecutive frequency samples caused by the EF, a new execution condition for the adaptive frequency adjustment in the CTS is introduced. Finally, CMs of a conductor plate and a fractal structure are investigated to demonstrate the performance of the proposed method, and the obtained results are discussed.
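The correlation step at the heart of such a CTS pairs eigenvectors at adjacent frequency samples by their normalized correlation. A minimal sketch of that pairing (illustrative only; the array layout and greedy assignment are assumptions, not the authors' implementation):

```python
import numpy as np

def pair_modes_by_correlation(prev_vecs, curr_vecs):
    """Pair eigenvectors at two adjacent frequency samples by maximum
    normalized correlation; columns of each array are eigenvectors.

    Illustrative sketch of eigenvector correlation-based CM tracking,
    not the paper's implementation.
    """
    # Normalized correlation magnitude between every previous/current pair.
    corr = np.abs(prev_vecs.conj().T @ curr_vecs)
    corr /= (np.linalg.norm(prev_vecs, axis=0)[:, None]
             * np.linalg.norm(curr_vecs, axis=0)[None, :])
    # Greedy one-to-one assignment, most confident previous mode first.
    assignment, taken = {}, set()
    for i in np.argsort(-corr.max(axis=1)):
        for j in np.argsort(-corr[i]):
            if j not in taken:
                assignment[int(i)] = int(j)
                taken.add(int(j))
                break
    return assignment
```

A mode pair whose best correlation stays low across a frequency sample is exactly the kind of trajectory anomaly a postprocessing stage can flag as a CRA candidate.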
Imagining a Future of Designing with AI: Dynamic Grounding, Constructive Negotiation, and Sustainable Motivation
Priyan Vaithilingam
Elena L. Glassman
Leveraging Function Space Aggregation for Federated Learning at Scale
Nikita Dhawan
Nicole Elyse Mitchell
Zachary Charles
Zachary Garrett
The federated learning paradigm has motivated the development of methods for aggregating multiple client updates into a global server model, without sharing client data. Many federated learning algorithms, including the canonical Federated Averaging (FedAvg), take a direct (possibly weighted) average of the client parameter updates, motivated by results in distributed optimization. In this work, we adopt a function space perspective and propose a new algorithm, FedFish, that aggregates local approximations to the functions learned by clients, using an estimate based on their Fisher information. We evaluate FedFish on realistic, large-scale cross-device benchmarks. While the performance of FedAvg can suffer as client models drift further apart, we demonstrate that FedFish is more robust to longer local training. Our evaluation across several settings in image and language benchmarks shows that FedFish outperforms FedAvg as local training epochs increase. Further, FedFish results in global networks that are more amenable to efficient personalization via local fine-tuning on the same or shifted data distributions. For instance, federated pretraining on the C4 dataset, followed by few-shot personalization on Stack Overflow, results in a 7% improvement in next-token prediction by FedFish over FedAvg.
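As a rough illustration of Fisher-based aggregation (a sketch of the general idea under a diagonal Fisher approximation, not the authors' exact function-space objective):

```python
import numpy as np

def fisher_weighted_average(client_params, client_fishers, eps=1e-8):
    """Per-coordinate Fisher-weighted average of client parameter vectors.

    Sketch in the spirit of Fisher-information-based aggregation; the
    FedFish paper aggregates local function-space approximations, whereas
    this toy weights each coordinate by a diagonal Fisher estimate.
    """
    params = np.stack(client_params)    # shape: (num_clients, dim)
    fishers = np.stack(client_fishers)  # same shape; nonnegative entries
    weights = fishers / (fishers.sum(axis=0, keepdims=True) + eps)
    return (weights * params).sum(axis=0)
```

Coordinates where a client's Fisher estimate is large, i.e. where its local function is most sensitive to the parameter, dominate the average, rather than every client contributing equally as in FedAvg.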
Metrics reloaded: Pitfalls and recommendations for image analysis validation
Lena Maier-Hein
Annika Reinke
Evangelia Christodoulou
Ben Glocker
Patrick Godau
Fabian Isensee
Jens Kleesiek
Michal Kozubek
Mauricio Reyes
Michael A. Riegler
Manuel Wiesenfarth
Michael Baumgartner
Matthias Eisenmann
Doreen Heckmann-Nötzel
A. Emre Kavur
Tim Rädsch
Minu Dietlinde Tizabi
Laura C. Acion
Michela Antonelli
Spyridon Bakas
Peter Bankhead
Allison Benis
M. Cardoso
Veronika Cheplygina
Beth A. Cimini
Gary S. Collins
Keyvan Farahani
Bram van Ginneken
Daniel A. Hashimoto
Michael M. Hoffman
Merel Huisman
Pierre Jannin
Charles E. Kahn
A. Karargyris
Alan Karthikesalingam
H. Kenngott
Annette Kopp-Schneider
Anna Kreshuk
Tahsin Kurc
Bennett Landman
G. Litjens
Amin Madani
Klaus Maier-Hein
Anne L. Martel
Peter Mattson
Erik H. W. Meijering
Bjoern Menze
David Moher
K. Moons
Henning Müller
Felix Nickel
Brennan Nichyporuk
Jens Petersen
Nasir M. Rajpoot
Nicola Rieke
Julio Saez-Rodriguez
Clarisa Sánchez Gutiérrez
Shravya Jaganath Shetty
M. Smeden
Carole H. Sudre
Ronald M. Summers
Abdel Aziz Taha
Sotirios A. Tsaftaris
B. Calster
Gael Varoquaux
Paul F. Jäger
Nearest Neighbour Score Estimators for Diffusion Generative Models
Matthew Niedoba
Dylan Green
Saeid Naderiparizi
Vasileios Lioutas
Jonathan Wilder Lavington
Xiaoxuan Liang
Yunpeng Liu
Ke Zhang
Setareh Dabiri
Adam Ścibior
Berend Zwartsenberg
The Leukemoid Reaction in Severe Alcoholic Hepatitis: A Case Report
Sachin Agrawal
Sunil Kumar
Sourya Acharya
Iterated Denoising Energy Matching for Sampling from Boltzmann Densities
Tara Akhound-Sadegh
Jarrid Rector-Brooks
Joey Bose
Sarthak Mittal
Pablo Lemos
Cheng-Hao Liu
Marcin Sendera
Nikolay Malkin
Alexander Tong
Efficiently generating statistically independent samples from an unnormalized probability distribution, such as equilibrium samples of many-body systems, is a foundational problem in science. In this paper, we propose Iterated Denoising Energy Matching (iDEM), an iterative algorithm that uses a novel stochastic score matching objective leveraging solely the energy function and its gradient -- and no data samples -- to train a diffusion-based sampler. Specifically, iDEM alternates between (I) sampling regions of high model density from a diffusion-based sampler and (II) using these samples in our stochastic matching objective to further improve the sampler. iDEM is scalable to high dimensions as the inner matching objective is simulation-free and requires no MCMC samples. Moreover, by leveraging the fast mode mixing behavior of diffusion, iDEM smooths out the energy landscape enabling efficient exploration and learning of an amortized sampler. We evaluate iDEM on a suite of tasks ranging from standard synthetic energy functions to invariant $n$-body particle systems.
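The stochastic matching objective admits a compact Monte Carlo form that touches only the energy and its gradient. A sketch of such a score-target estimator (an illustration of the general denoising-energy-matching idea, not the authors' code; the assumed interface is that `energy_fn` maps a `(K, batch, dim)` tensor of points to `(K, batch)` energies):

```python
import torch

def dem_score_target(energy_fn, x_t, sigma_t, k=64):
    """Self-normalized Monte Carlo estimate of grad log p_t(x_t), where
    p_t is the Boltzmann density convolved with N(0, sigma_t^2 I).
    Uses only the energy function and its gradient -- no data samples.
    """
    # K Gaussian perturbations around each noised point x_t.
    x0 = (x_t.unsqueeze(0)
          + sigma_t * torch.randn(k, *x_t.shape)).detach().requires_grad_(True)
    neg_energy = -energy_fn(x0)                              # (k, batch)
    (grad_neg_e,) = torch.autograd.grad(neg_energy.sum(), x0)
    # Importance weights: low-energy samples dominate the estimate.
    w = torch.softmax(neg_energy, dim=0).unsqueeze(-1)       # (k, batch, 1)
    return (w * grad_neg_e).sum(dim=0)                       # x_t.shape
```

A score network regressed onto these targets then defines the diffusion-based sampler whose own samples feed the next outer-loop iteration.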
Reinforcement Learning for Blind Stair Climbing with Legged and Wheeled-Legged Robots
Simon Chamorro
Victor Klemm
Miguel I. Valls
Roland Siegwart
On the Privacy of Selection Mechanisms with Gaussian Noise
Jonathan Lebensold
Borja Balle
V-STaR: Training Verifiers for Self-Taught Reasoners
Arian Hosseini
Xingdi Yuan
Nikolay Malkin
Rishabh Agarwal
Common self-improvement approaches for large language models (LLMs), such as STaR (Zelikman et al., 2022), iteratively fine-tune LLMs on self-generated solutions to improve their problem-solving ability. However, these approaches discard the large amounts of incorrect solutions generated during this process, potentially neglecting valuable information in such solutions. To address this shortcoming, we propose V-STaR that utilizes both the correct and incorrect solutions generated during the self-improvement process to train a verifier using DPO that judges correctness of model-generated solutions. This verifier is used at inference time to select one solution among many candidate solutions. Running V-STaR for multiple iterations results in progressively better reasoners and verifiers, delivering a 4% to 17% test accuracy improvement over existing self-improvement and verification approaches on common code generation and math reasoning benchmarks with LLaMA2 models.
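At inference time the trained verifier simply reranks candidates. A best-of-n selection sketch (`generator.sample` and `verifier.score` are hypothetical interfaces standing in for the fine-tuned LLM and the DPO-trained verifier):

```python
def best_of_n(problem, generator, verifier, n=16):
    """Sample n candidate solutions and return the one the verifier
    scores highest -- the inference-time selection described above."""
    candidates = [generator.sample(problem) for _ in range(n)]
    return max(candidates, key=lambda c: verifier.score(problem, c))
```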
Deep Learning for Data-Driven Districting-and-Routing
Arthur Ferraz
Thibaut Vidal
Implicit Diffusion: Efficient Optimization through Stochastic Sampling
Pierre Marion
Anna Korba
Peter Bartlett
Mathieu Blondel
Valentin De Bortoli
Arnaud Doucet
Felipe Llinares-López
Quentin Berthet