David Rolnick

Core Academic Member
Canada CIFAR AI Chair
Assistant Professor, McGill University, School of Computer Science
Adjunct Professor, Université de Montréal, Department of Computer Science and Operations Research
Research Topics
AI and Sustainability
AI for Science
Applied Machine Learning
Biodiversity
Building Energy Management Systems
Climate
Climate Change
Climate Change AI
Climate Modeling
Climate Science
Climate Variable Downscaling
Computer Vision
Conservation Technology
Energy Systems
Forest Monitoring
Machine Learning and Climate Change
Machine Learning for Physical Sciences
Machine Learning in Climate Modeling
Machine Learning Theory
Out-of-Distribution (OOD) Detection
Remote Sensing
Satellite Remote Sensing
Time Series Forecasting
Vegetation

Biography

David Rolnick is an assistant professor at McGill University’s School of Computer Science, a core academic member of Mila – Quebec Artificial Intelligence Institute, and a Canada CIFAR AI Chair holder. Rolnick’s work focuses on applications of machine learning to help address climate change. He is the co-founder and chair of Climate Change AI and scientific co-director of Sustainability in the Digital Age. He completed his PhD in applied mathematics at the Massachusetts Institute of Technology (MIT) and has been an NSF Mathematical Sciences Postdoctoral Research Fellow, an NSF Graduate Research Fellow, and a Fulbright Scholar. He was named to MIT Technology Review’s “35 Innovators Under 35” in 2021.

Current Students

Collaborating researcher
Collaborating Alumni - McGill University
Collaborating researcher - Cambridge University
Co-supervisor:
Postdoctorate - McGill University
Collaborating researcher - McGill University
Collaborating researcher - N/A
Co-supervisor:
Master's Research - McGill University
Collaborating researcher - Leipzig University
Master's Research - McGill University
Collaborating researcher
Collaborating researcher
Collaborating researcher
Independent visiting researcher - Politecnico di Milano
Independent visiting researcher
Collaborating researcher - Université de Montréal
Collaborating researcher - Johannes Kepler University
Collaborating researcher - University of Amsterdam
Master's Research - McGill University
PhD - McGill University
PhD - McGill University
Independent visiting researcher - Université de Montréal
Collaborating researcher - Polytechnique Montréal
Principal supervisor:
Collaborating researcher - University of East Anglia
Collaborating researcher
Collaborating researcher - Columbia University
Postdoctorate - McGill University
Co-supervisor:
Collaborating researcher - University of Waterloo
Co-supervisor:
Collaborating Alumni - Université de Montréal
Master's Research - McGill University
Collaborating researcher - Columbia University
Master's Research - McGill University
Collaborating researcher - University of Tübingen
Collaborating researcher - Karlsruhe Institute of Technology
PhD - McGill University
Postdoctorate - Université de Montréal
Principal supervisor:
Collaborating researcher
PhD - McGill University
Collaborating Alumni - McGill University

Publications

Tackling Climate Change with Machine Learning: Fostering the Maturity of ML Applications for Climate Change
Shiva Madadkhani
Olivia Mendivil Ramos
Millie Chapman
Jesse Dunietz
Dataset Difficulty and the Role of Inductive Bias
Motivated by the goals of dataset pruning and defect identification, a growing body of methods has been developed to score individual examples within a dataset. These methods, which we call "example difficulty scores", are typically used to rank or categorize examples, but the consistency of rankings between different training runs, scoring methods, and model architectures is generally unknown. To determine how example rankings vary due to these random and controlled effects, we systematically compare different formulations of scores over a range of runs and model architectures. We find that scores largely share the following traits: they are noisy over individual runs of a model, strongly correlated with a single notion of difficulty, and reveal examples that range from being highly sensitive to insensitive to the inductive biases of certain model architectures. Drawing from statistical genetics, we develop a simple method for fingerprinting model architectures using a few sensitive examples. These findings guide practitioners in maximizing the consistency of their scores (e.g., by choosing appropriate scoring methods, number of runs, and subsets of examples) and establish comprehensive baselines for evaluating scores in the future.
Application-Driven Innovation in Machine Learning
Alan Aspuru-Guzik
Sara Beery
Bistra Dilkina
Priya L. Donti
Marzyeh Ghassemi
Hannah Kerner
Claire Monteleoni
Esther Rolf
Milind Tambe
Adam White
As applications of machine learning proliferate, innovative algorithms inspired by specific real-world challenges have become increasingly important. Such work offers the potential for significant impact not merely in domains of application but also in machine learning itself. In this paper, we describe the paradigm of application-driven research in machine learning, contrasting it with the more standard paradigm of methods-driven research. We illustrate the benefits of application-driven machine learning and how this approach can productively synergize with methods-driven work. Despite these benefits, we find that reviewing, hiring, and teaching practices in machine learning often hold back application-driven innovation. We outline how these processes may be improved.
Linear Weight Interpolation Leads to Transient Performance Gains
PhAST: Physics-Aware, Scalable, and Task-specific GNNs for Accelerated Catalyst Design
Simultaneous linear connectivity of neural networks modulo permutation
Ekansh Sharma
Tom Denton
Daniel M. Roy
A landmark environmental law looks ahead
Robert L. Fischman
J. B. Ruhl
Brenna R. Forester
Tanya M. Lama
Marty Kardos
Grethel Aguilar Rojas
Nicholas A. Robinson
Patrick D. Shirey
Gary A. Lamberti
Amy W. Ando
Stephen Palumbi
Michael Wara
Mark W. Schwartz
Matthew A. Williamson
Tanya Berger-Wolf
Sara Beery
Justin Kitzes
David Thau
Devis Tuia
Daniel Rubenstein
Caleb R. Hickman
Julie Thorstenson
Gregory E. Kaebnick
James P. Collins
Athmeya Jayaram
Thomas Deleuil
Ying Zhao
FoMo: Multi-Modal, Multi-Scale and Multi-Task Remote Sensing Foundation Models for Forest Monitoring
FoMo-Bench: a multi-modal, multi-scale and multi-task Forest Monitoring Benchmark for remote sensing foundation models
Forests are an essential part of Earth's ecosystems and natural systems, as well as providing services on which humanity depends, yet they are rapidly changing as a result of land use decisions and climate change. Understanding and mitigating negative effects requires parsing data on forests at global scale from a broad array of sensory modalities, and recently many such problems have been approached using machine learning algorithms for remote sensing. To date, forest-monitoring problems have largely been addressed in isolation. Inspired by the rise of foundation models for computer vision and remote sensing, we here present the first unified Forest Monitoring Benchmark (FoMo-Bench). FoMo-Bench consists of 15 diverse datasets encompassing satellite, aerial, and inventory data, covering a variety of geographical regions, and including multispectral, red-green-blue, synthetic aperture radar (SAR) and LiDAR data with various temporal, spatial and spectral resolutions. FoMo-Bench includes multiple types of forest-monitoring tasks, spanning classification, segmentation, and object detection. To further enhance the diversity of tasks and geographies represented in FoMo-Bench, we introduce a novel global dataset, TalloS, combining satellite imagery with ground-based annotations for tree species classification, encompassing 1,000+ categories across multiple hierarchical taxonomic levels (species, genus, family). Finally, we propose FoMo-Net, a baseline foundation model with the capacity to process any combination of commonly used spectral bands in remote sensing, across diverse ground sampling distances and geographical locations worldwide. This work aims to inspire research collaborations between machine learning and forest biology researchers in exploring scalable multi-modal and multi-task models for forest monitoring. All code and data will be made publicly available.
Towards Causal Representations of Climate Model Data
Charlotte Emilie Elektra Lange
Yaniv Gurwicz
Peer Nowack
Climate models, such as Earth system models (ESMs), are crucial for simulating future climate change based on projected Shared Socioeconomic Pathways (SSP) greenhouse gas emissions scenarios. While ESMs are sophisticated and invaluable, machine learning-based emulators trained on existing simulation data can project additional climate scenarios much faster and are computationally efficient. However, they often lack generalizability and interpretability. This work delves into the potential of causal representation learning, specifically the Causal Discovery with Single-parent Decoding (CDSD) method, which could render climate model emulation efficient and interpretable. We evaluate CDSD on multiple climate datasets, focusing on emissions, temperature, and precipitation. Our findings shed light on the challenges, limitations, and promise of using CDSD as a stepping stone towards more interpretable and robust climate model emulation.