Publications

Offline Model-Based Optimization: Comprehensive Review
Jiayao Gu
Zixuan Liu
Can Chen
Offline optimization is a fundamental challenge in science and engineering, where the goal is to optimize black-box functions using only offline datasets. This setting is particularly relevant when querying the objective function is prohibitively expensive or infeasible, with applications spanning protein engineering, material discovery, neural architecture search, and beyond. The main difficulty lies in accurately estimating the objective landscape beyond the available data, where extrapolations are fraught with significant epistemic uncertainty. This uncertainty can lead to objective hacking (reward hacking), exploiting model inaccuracies in unseen regions, or other spurious optimizations that yield misleadingly high performance estimates outside the training distribution. Recent advances in model-based optimization (MBO) have harnessed the generalization capabilities of deep neural networks to develop offline-specific surrogate and generative models. Trained with carefully designed strategies, these models are more robust against out-of-distribution issues, facilitating the discovery of improved designs. Despite its growing impact in accelerating scientific discovery, the field lacks a comprehensive review. To bridge this gap, we present the first thorough review of offline MBO. We begin by formalizing the problem for both single-objective and multi-objective settings and by reviewing recent benchmarks and evaluation metrics. We then categorize existing approaches into two key areas: surrogate modeling, which emphasizes accurate function approximation in out-of-distribution regions, and generative modeling, which explores high-dimensional design spaces to identify high-performing designs. Finally, we examine the key challenges and propose promising directions for advancement in this rapidly evolving field, including the safe control of superintelligent systems.
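The surrogate-modeling pipeline sketched in the abstract (fit a model to the offline dataset, then search for designs that score well under it) can be illustrated with a minimal, hedged example. Everything below is an illustrative toy, not the paper's method: the dataset, network, and hyperparameters are placeholders, and the second loop deliberately shows the naive gradient-ascent step where objective hacking can arise without offline-specific regularization.

import torch
import torch.nn as nn

# Hypothetical offline dataset: N designs in a d-dimensional space with known scores (toy objective).
N, d = 512, 16
offline_designs = torch.randn(N, d)
offline_scores = -offline_designs.pow(2).sum(dim=1, keepdim=True)

# 1) Fit a surrogate f_theta(x) to the objective using only the offline data.
surrogate = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(offline_designs), offline_scores)
    loss.backward()
    opt.step()

# 2) Gradient-ascend candidate designs against the frozen surrogate.
#    Without offline-specific training strategies, the optimizer can drift into
#    out-of-distribution regions where the surrogate's predictions are unreliable.
top = offline_scores.squeeze().topk(8).indices
candidates = offline_designs[top].clone().requires_grad_(True)
design_opt = torch.optim.Adam([candidates], lr=1e-2)
for _ in range(100):
    design_opt.zero_grad()
    (-surrogate(candidates).sum()).backward()
    design_opt.step()

print("surrogate-predicted scores of proposed designs:", surrogate(candidates).detach().squeeze())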
RL4Med-DDPO: Reinforcement Learning for Controlled Guidance Towards Diverse Medical Image Generation using Vision-Language Foundation Models
Meditation induces shifts in neural oscillations, brain complexity and critical dynamics: Novel insights from MEG
Annalisa Pascarella
David Meunier
Jordan O’Byrne
Tarek Lajnef
Antonino Raffone
Roberto Guidotti
Vittorio Pizzella
Laura Marzetti
UI-Vision: A Desktop-centric GUI Benchmark for Visual Perception and Interaction
Xiangru Jian
Kevin Qinghong Lin
Juan A. Rodriguez
Montek Kalsi
M. T. Özsu
David Vazquez
Sai Rajeswar
Hitting the right pitch: Cortical tracking of fundamental frequency changes across speech rates in auditory and sensorimotor regions
Yorguin-Jose Mantilla-Ramos
Ana-Sofía Hincapié-Casas
Annalisa Pascarella
Tarek Lajnef
Richard M. Leahy
Emily B.J. Coffey
Véronique Boulenger
Tapered Off-Policy REINFORCE: Stable and efficient reinforcement learning for LLMs
Sparse Decomposition of Graph Neural Networks
Yaochen Hu
Mai Zeng
Ge Zhang
Pavel Rumiantsev
Yingxue Zhang
Negotiative Alignment: Embracing Disagreement to Achieve Fairer Outcomes -- Insights from Urban Studies
Sample Compression for Continual Learning