ICLR 2025 Workshop on Tackling Climate Change with Machine Learning: Data-Centric Approaches in ML for Climate Action
Konstantin Klemmer
Melissa Chapman
Lily Xu
Poon Kin Ho
Mélisande Teng
Patrick Emami
Climate change is one of the greatest problems society has ever faced, with increasingly severe consequences for humanity as natural disasters multiply, sea levels rise, and ecosystems falter. While no silver bullet, machine learning can be an invaluable tool in fighting climate change via a wide array of applications and techniques, from designing smart electric grids to tracking greenhouse gas emissions through satellite imagery. These applications require algorithmic innovations in machine learning and close collaboration with diverse fields and practitioners. This workshop is intended as a forum for those in the global machine learning community who wish to help tackle climate change, and further aims to foster cross-pollination between researchers in machine learning and experts in complementary climate-relevant fields. Building on our past workshops on this topic, this workshop particularly aims to explore data-centric ML approaches for climate action. Data-centric ML is not only a timely topic within the ICLR community, as analyzing and engineering (pre)training datasets becomes increasingly important, but also holds specific challenges and opportunities in climate-related areas. We also want to take the opportunity of ICLR being hosted in Singapore to engage with local communities and shine a light on work that deploys, analyzes, or critiques ML methods and their use for climate change adaptation and mitigation on the Asian continent.
An identification of models to help in the design of national strategies and policies to reduce greenhouse gas emissions.
Danielle Maia de Souza
Radhwane Boukelouha
Catherine Morency
Normand Mousseau
Martin Trépanier
Inference-Aware Fine-Tuning for Best-of-N Sampling in Large Language Models
Yinlam Chow
Guy Tennenholtz
Izzeddin Gur
Vincent Zhuang
Bo Dai
Sridhar Thiagarajan
Craig Boutilier
Aviral Kumar
Aleksandra Faust
Recent studies have indicated that effectively utilizing inference-time compute is crucial for attaining better performance from large language models (LLMs). In this work, we propose a novel inference-aware fine-tuning paradigm, in which the model is fine-tuned in a manner that directly optimizes the performance of the inference-time strategy. We study this paradigm using the simple yet effective Best-of-N (BoN) inference strategy, in which a verifier selects the best out of a set of LLM-generated responses. We devise the first imitation learning and reinforcement learning (RL) methods for BoN-aware fine-tuning, overcoming the challenging, non-differentiable argmax operator within BoN. We empirically demonstrate that our BoN-aware models implicitly learn a meta-strategy that interleaves best responses with more diverse responses that might be better suited to a test-time input, a process reminiscent of the exploration-exploitation trade-off in RL. Our experiments demonstrate the effectiveness of BoN-aware fine-tuning in terms of improved performance and inference-time compute. In particular, we show that our methods improve the Bo32 performance of Gemma 2B on Hendrycks MATH from 26.8% to 30.8%, and pass@32 from 60.0% to 67.0%, as well as the pass@16 on HumanEval from 61.6% to 67.1%.
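As a rough illustration of the Best-of-N procedure described in this abstract, here is a minimal sketch (not the authors' implementation). The callables `generate_responses` and `verifier_score` are hypothetical stand-ins for an LLM sampler and a learned verifier, used only to make the selection step concrete.

```python
import numpy as np

def best_of_n(prompt, generate_responses, verifier_score, n=32):
    """Best-of-N (BoN) inference: sample n candidate responses and return
    the one the verifier scores highest (a hard, non-differentiable argmax).

    `generate_responses` and `verifier_score` are hypothetical callables
    standing in for an LLM sampler and a learned verifier.
    """
    candidates = generate_responses(prompt, num_samples=n)  # list of n strings
    scores = np.array([verifier_score(prompt, c) for c in candidates])
    return candidates[int(np.argmax(scores))]  # hard selection of the best candidate
```

The hard argmax in the last line is the non-differentiable operator the abstract refers to; a softmax-weighted average of candidate scores is one common relaxation, though not necessarily the one used in the paper.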
Integer Programming Games.
Gabriele Dragotto
Andrea Lodi
Sriram Sankaranarayanan
Integrating Generative and Experimental Platforms for Biomolecular Design
Cheng-Hao Liu
Soojung Yang
Sidney L Lisanza
Francesca-Zhoufan Li
Hannes Stärk
Jacob Gershon
Lauren Hong
Pranam Chatterjee
Tommi Jaakkola
Regina Barzilay
David Baker
Frances H. Arnold
Biomolecular design, through artificial engineering of proteins, ligands, and nucleic acids, holds immense promise in addressing pressing medical, industrial, and environmental challenges. While generative machine learning has shown significant potential in this area, a palpable disconnect exists with experimental biology: many ML research efforts prioritize static benchmark performance, potentially sidelining impactful biological applications. This workshop seeks to bridge this gap by bringing computationalists and experimentalists together, catalyzing a deeper interdisciplinary discourse. Together, we will explore the strengths and challenges of generative ML in biology, experimental integration of generative ML, and biological problems ready for ML. To attract high-quality and diverse research, we partnered with Nature Biotechnology for a special collection, and we created dedicated tracks for in-silico ML research and hybrid ML-experimental biology research. Our lineup features emerging leaders as speakers and renowned scientists as panelists, encapsulating a spectrum from high-throughput experimentation and computational biology to generative ML. With a diverse organizing team and backed by industry sponsors, we dedicate the workshop to pushing the boundaries of ML's role in biology.
Investigating Generalization Behaviours of Generative Flow Networks
Lazar Atanackovic
Generative Flow Networks (GFlowNets, GFNs) are a generative framework for learning unnormalized probability mass functions over discrete spaces. Since their inception, GFlowNets have proven to be useful for learning generative models in applications where the majority of the discrete space is unvisited during training. This has inspired some to hypothesize that GFlowNets, when paired with deep neural networks (DNNs), have favourable generalization properties. In this work, we empirically verify some of the hypothesized mechanisms of generalization of GFlowNets. In particular, we find that the functions that GFlowNets learn to approximate have an implicit underlying structure which facilitates generalization. We also find that GFlowNets are sensitive to being trained offline and off-policy; however, the reward implicitly learned by GFlowNets is robust to changes in the training distribution.
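For readers unfamiliar with GFlowNets, the sketch below shows the trajectory-balance objective commonly used to train a GFlowNet policy parameterized by a DNN. It is a generic illustration under the standard definitions, not the specific training setup studied in this paper.

```python
import torch

def trajectory_balance_loss(log_Z, log_pf_traj, log_pb_traj, log_reward):
    """Trajectory balance (TB) loss for one sampled trajectory tau ending in x:
        (log Z + sum_t log P_F(s_{t+1}|s_t) - log R(x) - sum_t log P_B(s_t|s_{t+1}))^2
    where P_F / P_B are the forward / backward policies (DNN outputs), R is the
    unnormalized reward over the discrete space, and Z is a learned scalar
    approximating the partition function. All arguments are scalar tensors.
    """
    return (log_Z + log_pf_traj - log_reward - log_pb_traj) ** 2
```

Minimizing this loss over sampled trajectories drives the sampler toward generating terminal states x with probability proportional to R(x), i.e., the unnormalized probability mass function the abstract refers to.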
Investigating the Effect of Providing Required Training to Mothers of Children with Surgery and Its Effect on Mothers' Anxiety
Julia Ferreira
Nadia Safa
Fabio Botelho
Robin Petroze
Hussein Wissanji
Pramod Puligandla
Kenneth Shaw
Maeve Trudeau
Elena Guadagno
Jean-Martin Laberge
Sherif Emil
A Learning-Based Framework for Fair and Scalable Solution Generation in Kidney Exchange Problems
Longitudinal reproducibility of brain and spinal cord quantitative MRI biomarkers
Mathieu Boudreau
Agah Karakuzu
Arnaud Boré
Basile Pinsard
Kiril Zelenkovski
Eva Alonso‐Ortiz
Julie Boyle
Quantitative MRI (qMRI) promises better specificity, accuracy, repeatability, and reproducibility relative to its clinically used qualitative MRI counterpart. Longitudinal reproducibility is particularly important in qMRI. The goal is to reliably quantify tissue properties that may be assessed in longitudinal clinical studies throughout disease progression or during treatment. In this work, we present the initial data release of the quantitative MRI portion of the Courtois project on neural modelling (CNeuroMod), where the brain and cervical spinal cord of six participants were scanned at regular intervals over the course of several years. This first release includes 3 years of data collection and up to 10 sessions per participant using quantitative MRI imaging protocols (T1, magnetization transfer (MTR, MTsat), and diffusion). In the brain, T1 (MP2RAGE), fractional anisotropy (FA), mean diffusivity (MD), and radial diffusivity (RD) all exhibited high longitudinal reproducibility (intraclass correlation coefficient (ICC) ≃ 1 and within-subject coefficient of variation (wCV) < 1%). The spinal cord cross-sectional area (CSA) computed using T2w images and T1 (MTsat) exhibited the best longitudinal reproducibility (ICC ≃ 1 and 0.7, respectively, and wCV 2.4% and 6.9%). Results from this work show the level of longitudinal reproducibility that can be expected from qMRI protocols in the brain and spinal cord in the absence of hardware and software upgrades, and could help in the design of future longitudinal clinical studies.
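As a rough illustration of the two reproducibility metrics quoted above, the following is a minimal sketch of the within-subject coefficient of variation and a one-way random-effects ICC computed from a participants-by-sessions matrix. It is a simplified, assumed formulation of these standard metrics, not the study's actual analysis code.

```python
import numpy as np

def wcv_percent(data):
    """Within-subject coefficient of variation (in %):
    RMS of per-participant session SDs divided by the grand mean.
    `data` is a (participants, sessions) array of one qMRI metric."""
    within_sd = data.std(axis=1, ddof=1)
    return 100.0 * np.sqrt(np.mean(within_sd ** 2)) / data.mean()

def icc_oneway(data):
    """One-way random-effects ICC(1,1) from a one-way ANOVA decomposition."""
    n, k = data.shape
    subject_means = data.mean(axis=1)
    ms_between = k * np.sum((subject_means - data.mean()) ** 2) / (n - 1)
    ms_within = np.sum((data - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

An ICC close to 1 and a small wCV, as reported for the brain metrics above, indicate that between-participant variance dominates session-to-session variance within each participant.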