Publications

ICLR 2025 Workshop on Tackling Climate Change with Machine Learning: Data-Centric Approaches in ML for Climate Action
Konstantin Klemmer
Melissa Chapman
Lily Xu
Poon Kin Ho
Mélisande Teng
Patrick Emami
Climate change is one of the greatest problems society has ever faced, with increasingly severe consequences for humanity as natural disasters multiply, sea levels rise, and ecosystems falter. While no silver bullet, machine learning can be an invaluable tool in fighting climate change via a wide array of applications and techniques, from designing smart electric grids to tracking greenhouse gas emissions through satellite imagery. These applications require algorithmic innovations in machine learning and close collaboration with diverse fields and practitioners. This workshop is intended as a forum for those in the global machine learning community who wish to help tackle climate change, and further aims to foster cross-pollination between researchers in machine learning and experts in complementary climate-relevant fields. Building on our past workshops on this topic, this workshop particularly aims to explore data-centric ML approaches for climate action. Data-centric ML is not only a timely topic within the ICLR community, as analyzing and engineering (pre)training datasets becomes increasingly important, but also holds specific challenges and opportunities in climate-related areas. We also want to take the opportunity of ICLR being hosted in Singapore to engage with local communities and shine a light on work that deploys, analyzes, or critiques ML methods and their use for climate change adaptation and mitigation on the Asian continent.
An identification of models to help in the design of national strategies and policies to reduce greenhouse gas emissions.
Danielle Maia de Souza
Radhwane Boukelouha
Catherine Morency
Normand Mousseau
Martin Trépanier
Implicit Diffusion: Efficient Optimization through Stochastic Sampling
Pierre Marion
Anna Korba
Peter Bartlett
Mathieu Blondel
Valentin De Bortoli
Arnaud Doucet
Felipe Llinares-López
Quentin Berthet
Improving Robustness and Reliability in Medical Image Classification with Latent-Guided Diffusion and Nested-Ensembles
Once deployed, medical image analysis methods are often faced with unexpected image corruptions and noise perturbations. These unknown covariate shifts present significant challenges to deep learning-based methods trained on "clean" images. This often results in unreliable predictions and poorly calibrated confidence, hence hindering clinical applicability. While recent methods have been developed to address specific issues such as confidence calibration or adversarial robustness, no single framework effectively tackles all these challenges simultaneously. To bridge this gap, we propose LaDiNE, a novel ensemble learning method combining the robustness of Vision Transformers with diffusion-based generative models for improved reliability in medical image classification. Specifically, transformer encoder blocks are used as hierarchical feature extractors that learn invariant features from images for each ensemble member, resulting in features that are robust to input perturbations. In addition, diffusion models are used as flexible density estimators to estimate member densities conditioned on the invariant features, leading to improved modeling of complex data distributions while retaining properly calibrated confidence. Extensive experiments on tuberculosis chest X-rays and melanoma skin cancer datasets demonstrate that LaDiNE achieves superior performance compared to a wide range of state-of-the-art methods by simultaneously improving prediction accuracy and confidence calibration under unseen noise, adversarial perturbations, and resolution degradation.
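The ensemble-aggregation step described above can be illustrated with a minimal sketch: each member produces a predictive distribution, and a uniform mixture of those distributions tends to be less overconfident than any single member. This is only the aggregation stage; the paper's members combine transformer encoders with conditional diffusion density estimators, which are not reproduced here, and `ensemble_predict` is an illustrative name, not the paper's code.

```python
def ensemble_predict(member_probs):
    """Average the members' predictive distributions (a uniform mixture),
    then report the predicted class and its mixture confidence."""
    k = len(member_probs[0])
    n = len(member_probs)
    mixture = [sum(m[c] for m in member_probs) / n for c in range(k)]
    pred = max(range(k), key=lambda c: mixture[c])
    return pred, mixture[pred]

# Two members disagree on a binary label; the mixture confidence (0.65)
# is lower than the more confident member's 0.9, i.e. better calibrated
# under disagreement.
pred, conf = ensemble_predict([[0.9, 0.1], [0.4, 0.6]])
```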
Incorporating Spatial Information into Goal-Conditioned Hierarchical Reinforcement Learning via Graph Representations
The integration of graphs with Goal-conditioned Hierarchical Reinforcement Learning (GCHRL) has recently gained attention, as intermediate goals (subgoals) can be effectively sampled from graphs that naturally represent the overall task structure in most RL tasks. However, existing approaches typically rely on domain-specific knowledge to construct these graphs, limiting their applicability to new tasks. Other graph-based approaches create graphs dynamically during exploration but struggle to fully utilize them, because they have difficulty propagating information from the graphs to newly visited states. Additionally, current GCHRL methods face challenges such as sample inefficiency and poor subgoal representation. This paper proposes a solution to these issues by developing a graph encoder-decoder to evaluate unseen states. Our proposed method, Graph-Guided sub-Goal representation Generation RL (G4RL), can be incorporated into any existing GCHRL method when operating in environments with primarily symmetric and reversible transitions to enhance performance across this class of problems. We show that the graph encoder-decoder can be effectively implemented using a network trained on the state graph generated during exploration. Empirical results indicate that leveraging high- and low-level intrinsic rewards from the graph encoder-decoder significantly enhances the performance of state-of-the-art GCHRL approaches at a small extra computational cost in dense and sparse reward environments.
Inference-Aware Fine-Tuning for Best-of-N Sampling in Large Language Models
Yinlam Chow
Guy Tennenholtz
Izzeddin Gur
Vincent Zhuang
Bo Dai
Sridhar Thiagarajan
Craig Boutilier
Aviral Kumar
Aleksandra Faust
Recent studies have indicated that effectively utilizing inference-time compute is crucial for attaining better performance from large language models (LLMs). In this work, we propose a novel inference-aware fine-tuning paradigm, in which the model is fine-tuned in a manner that directly optimizes the performance of the inference-time strategy. We study this paradigm using the simple yet effective Best-of-N (BoN) inference strategy, in which a verifier selects the best out of a set of LLM-generated responses. We devise the first imitation learning and reinforcement learning (RL) methods for BoN-aware fine-tuning, overcoming the challenging, non-differentiable argmax operator within BoN. We empirically demonstrate that our BoN-aware models implicitly learn a meta-strategy that interleaves best responses with more diverse responses that might be better suited to a test-time input, a process reminiscent of the exploration-exploitation trade-off in RL. Our experiments demonstrate the effectiveness of BoN-aware fine-tuning in terms of improved performance and inference-time compute. In particular, we show that our methods improve the Bo32 performance of Gemma 2B on Hendrycks MATH from 26.8% to 30.8%, and pass@32 from 60.0% to 67.0%, as well as the pass@16 on HumanEval from 61.6% to 67.1%.
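The Best-of-N inference strategy the abstract builds on can be sketched in a few lines: sample N candidate responses, score each with a verifier, and return the argmax. This is only the inference-time loop, not the paper's fine-tuning method; `generate` and `verify` are assumed interfaces standing in for an LLM and a learned verifier.

```python
def best_of_n(prompt, generate, verify, n=4):
    """Best-of-N (BoN) inference: sample n candidate responses and
    return the one the verifier scores highest. The argmax over
    candidates is the non-differentiable step the paper's BoN-aware
    fine-tuning must work around."""
    candidates = [generate(prompt) for _ in range(n)]
    scores = [verify(prompt, c) for c in candidates]
    best_idx = max(range(n), key=lambda i: scores[i])
    return candidates[best_idx]

# Toy stand-ins: the "LLM" emits scripted drafts, and the verifier
# simply prefers longer answers.
drafts = iter(["42", "the answer is 42", "I think the answer is 42"])
pick = best_of_n("q", lambda p: next(drafts), lambda p, c: len(c), n=3)
```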
Insights into heart failure metabolite markers through explainable machine learning
Pamela Mehanna
Caroline Daneault
Leslie Hausermann
David Busseuil
Jean-Claude Tardif
Jocelyn Dupuis
Christine Des Rosiers
Matthieu Ruiz
Julie G. Hussin
Understanding molecular traits through metabolomics offers an avenue to tailor cardiovascular prevention, diagnosis, and treatment strategies more effectively. This study focuses on the application of machine learning (ML) and explainable artificial intelligence (XAI) algorithms to detect discriminant molecular signatures in heart failure (HF). We aim to uncover metabolites with significant predictive value by analyzing targeted metabolomics data through ML and XAI algorithms. After quality control, we analyzed 55 metabolites from 124 plasma samples, including 53 HF patients and 71 controls, comparing Ridge Logistic Regression, Support Vector Machine, and eXtreme Gradient Boosting models. All achieved high accuracy in predicting group labels: 84.0% [95% CI: 75.3–92.7], 85.73% [95% CI: 78.6–92.9], and 84.8% [95% CI: 76.1–93.5], respectively. Permutation-based variable importance and Local Interpretable Model-agnostic Explanations (LIME) were used for group-level and individual-level explainability, respectively, complemented by H-Friedman statistics for variable interactions, yielding reliable, explainable insights into the ML models. Metabolites well known for their association with HF, such as glucose and cholesterol, as well as the more recently described C18:1 carnitine, were reaffirmed in our analysis. The novel discovery of lignoceric acid (C24:0 fatty acid) as a critical discriminator was confirmed in a replication cohort, underscoring its potential as a metabolite marker. Furthermore, our study highlights the utility of 2-way variable interaction analysis in unveiling a network of metabolite interactions essential for accurate disease prediction. The results demonstrate our approach's efficacy in identifying key metabolites and their interactions, illustrating the power of ML and XAI in advancing personalized healthcare solutions.
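The group-level explainability technique mentioned above, permutation-based variable importance, can be sketched as follows: shuffle one feature's column and measure the resulting drop in the model's score. This is a pure-Python illustration under toy data, not the study's pipeline; in practice one would use `sklearn.inspection.permutation_importance`, and all names here are illustrative.

```python
import random

def permutation_importance(model, X, y, feature, metric, n_repeats=10, seed=0):
    """Mean drop in `metric` after shuffling `feature` across samples.
    A near-zero drop means the model does not rely on that feature."""
    rng = random.Random(seed)
    baseline = metric(y, [model(row) for row in X])
    drops = []
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        rng.shuffle(col)  # break the feature-label association
        X_perm = [dict(row, **{feature: v}) for row, v in zip(X, col)]
        drops.append(baseline - metric(y, [model(row) for row in X_perm]))
    return sum(drops) / n_repeats

# Toy cohort: a "model" that classifies HF purely from a glucose
# threshold, so shuffling an irrelevant feature changes nothing.
X = [{"glucose": g, "noise": i} for i, g in enumerate([3.0, 4.0, 6.0, 7.0])]
y = [0, 0, 1, 1]
model = lambda row: int(row["glucose"] > 5)
accuracy = lambda yt, yp: sum(a == b for a, b in zip(yt, yp)) / len(yt)
noise_imp = permutation_importance(model, X, y, "noise", accuracy)
# noise_imp == 0.0, since the model ignores "noise" entirely
```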
Instant3dit: Multiview Inpainting for Fast Editing of 3D Objects
Amir Barda
Matheus Gadelha
Vladimir Kim
Amit H. Bermano
Thibault Groueix
We propose a generative technique to edit 3D shapes, represented as meshes, NeRFs, or Gaussian Splats, in approximately 3 seconds, without the need for running an SDS type of optimization. Our key insight is to cast 3D editing as a multiview image inpainting problem, as this representation is generic and can be mapped back to any 3D representation using the bank of available Large Reconstruction Models. We explore different fine-tuning strategies to obtain both multiview generation and inpainting capabilities within the same diffusion model. In particular, the design of the inpainting mask is an important factor of training an inpainting model, and we propose several masking strategies to mimic the types of edits a user would perform on a 3D shape. Our approach takes 3D generative editing from hours to seconds and produces higher-quality results compared to previous works.
Integer Programming Games: A Gentle Computational Overview
Gabriele Dragotto
Andrea Lodi
Sriram Sankaranarayan
Integrating Generative and Experimental Platforms for Biomolecular Design
Cheng-Hao Liu
Soojung Yang
Sidney L Lisanza
Francesca-Zhoufan Li
Hannes Stärk
Jacob Gershon
Lauren Hong
Pranam Chatterjee
Tommi Jaakkola
Regina Barzilay
David Baker
Frances H. Arnold
Biomolecular design, through artificial engineering of proteins, ligands, and nucleic acids, holds immense promise in addressing pressing medical, industrial, and environmental challenges. While generative machine learning has shown significant potential in this area, a palpable disconnect exists with experimental biology: many ML research efforts prioritize static benchmark performance, potentially sidelining impactful biological applications. This workshop seeks to bridge this gap by bringing computationalists and experimentalists together, catalyzing a deeper interdisciplinary discourse. Together, we will explore the strengths and challenges of generative ML in biology, experimental integration of generative ML, and biological problems ready for ML. To attract high-quality and diverse research, we partnered with Nature Biotechnology for a special collection, and we created dedicated tracks for in-silico ML research and hybrid ML-experimental biology research. Our lineup features emerging leaders as speakers and renowned scientists as panelists, encapsulating a spectrum from high-throughput experimentation and computational biology to generative ML. With a diverse organizing team and backed by industry sponsors, we dedicate the workshop to pushing the boundaries of ML's role in biology.
International AI Safety Report
Bronwyn Fox
André Carlos Ponce de Leon Ferreira de Carvalho
Mona Nemer
Raquel Pezoa Rivera
Yi Zeng
Juha Heikkilä
Guillaume Avrin
Antonio Krüger
Balaraman Ravindran
Hammam Riza
Ciarán Seoighe
Ziv Katzir
Andrea Monti
Hiroaki Kitano
Nusu Mwamanzi
Fahad Albalawi
José Ramón López Portillo
Haroon Sheikh
Gill Jolly
Olubunmi Ajala
Jerry Sheehan
Dominic Vincent Ligot
Kyoung Mu Lee
Crystal Rugege
Denise Wong
Nuria Oliver
Christian Busch
Ahmet Halit Hatip
Oleksii Molchanovskyi
Marwan Alserkal
Chris Johnson
Amandeep Singh Gill
Saif M. Khan
Daniel Privitera
Tamay Besiroglu
Rishi Bommasani
Stephen Casper
Yejin Choi
Philip Fox
Ben Garfinkel
Danielle Goldfarb
Hoda Heidari
Anson Ho
Sayash Kapoor
Leila Khalatbari
Shayne Longpre
Sam Manning
Vasilios Mavroudis
Mantas Mazeika
Julian Michael
Jessica Newman
Kwan Yee Ng
Chinasa T. Okolo
Deborah Raji
Girish Sastry
Elizabeth Seger
Theodora Skeadas
Tobin South
Daron Acemoglu
Olubayo Adekanmbi
David Dalrymple
Thomas G. Dietterich
Edward W. Felten
Pascale Fung
Pierre-Olivier Gourinchas
Fredrik Heintz
Geoffrey Hinton
Nick Jennings
Andreas Krause
Susan Leavy
Percy Liang
Teresa Ludermir
Vidushi Marda
Emma Strubell
Florian Tramèr
Lucia Velasco
Nicole Wheeler
Helen Margetts
John McDermid
Jane Munga
Arvind Narayanan
Alondra Nelson
Clara Neppel
Alice Oh
Gopal Ramchurn
Stuart Russell
Marietje Schaake
Bernhard Schölkopf
Dawn Song
Alvaro Soto
Lee Tiedrich
Andrew Yao
Ya-Qin Zhang
Baran Acar
Ben Clifford
Lambrini Das
Claire Dennis
Freya Hempleman
Hannah Merchant
Rian Overy
Ben Snodin
Benjamin Prud’homme
The first International AI Safety Report comprehensively synthesizes the current evidence on the capabilities, risks, and safety of advanced AI systems. The report was mandated by the nations attending the AI Safety Summit in Bletchley, UK. Thirty nations, the UN, the OECD, and the EU each nominated a representative to the report's Expert Advisory Panel. A total of 100 AI experts contributed, representing diverse perspectives and disciplines. Led by the report's Chair, these independent experts collectively had full discretion over the report's content.
Investigating Generalization Behaviours of Generative Flow Networks
Generative Flow Networks (GFlowNets, GFNs) are a generative framework for learning unnormalized probability mass functions over discrete spaces. Since their inception, GFlowNets have proven to be useful for learning generative models in applications where the majority of the discrete space is unvisited during training. This has inspired some to hypothesize that GFlowNets, when paired with deep neural networks (DNNs), have favourable generalization properties. In this work, we empirically verify some of the hypothesized mechanisms of generalization of GFlowNets. In particular, we find that the functions that GFlowNets learn to approximate have an implicit underlying structure which facilitates generalization. We also find that GFlowNets are sensitive to being trained offline and off-policy; however, the reward implicitly learned by GFlowNets is robust to changes in the training distribution.