
David Rolnick

Core Academic Member
Canada CIFAR AI Chair
Assistant Professor, McGill University, School of Computer Science
Adjunct Professor, Université de Montréal, Department of Computer Science and Operations Research
Research Topics
AI and Sustainability
AI for Climate Change
AI for Science
Applied Machine Learning
Biodiversity
Building Energy Management Systems
Climate
Climate Change
Climate Modeling
Climate Science
Computer Vision
Conservation Technology
Downscaling of Climate Variables
Energy Systems
Forest Monitoring
Machine Learning and Climate Change
Machine Learning for Physical Sciences
Machine Learning in Climate Modeling
Machine Learning Theory
Out-of-Distribution (OOD) Detection
Remote Sensing
Satellite Remote Sensing
Time Series Forecasting
Vegetation

Biography

David Rolnick is an Assistant Professor and Canada CIFAR AI Chair at the School of Computer Science of McGill University and a Core Academic Member of Mila – Quebec AI Institute. His work focuses on applications of machine learning to help address climate change. He is co-founder and chair of Climate Change AI and scientific co-director of Sustainability in the Digital Age. Rolnick received his PhD in applied mathematics from the Massachusetts Institute of Technology (MIT). He was an NSF Mathematical Sciences Postdoctoral Fellow, an NSF Graduate Research Fellow, and a Fulbright Scholar, and was named to the MIT Technology Review's 2021 list of "35 Innovators Under 35".

Current Students

Collaborating Researcher
Collaborating Alumni - McGill
Collaborating Researcher - Cambridge University
Co-supervisor:
Postdoctorate - McGill
Collaborating Researcher - McGill
Collaborating Researcher
Collaborating Researcher - N/A
Co-supervisor:
Master's Research - McGill
Collaborating Researcher - Leipzig University
Master's Research - McGill
Collaborating Researcher
Collaborating Researcher
Collaborating Researcher
Independent Visiting Researcher - Politecnico di Milano
Independent Visiting Researcher
Collaborating Researcher - UdeM
Collaborating Researcher - Johannes Kepler University
Collaborating Researcher - University of Amsterdam
Master's Research - McGill
Collaborating Researcher
Independent Visiting Researcher - Université de Montréal
Collaborating Researcher - Polytechnique Montréal
Principal supervisor:
Collaborating Researcher - University of East Anglia
Collaborating Researcher
Collaborating Researcher - Columbia University
Postdoctorate - McGill
Co-supervisor:
Collaborating Researcher - University of Waterloo
Co-supervisor:
Collaborating Alumni - UdeM
Master's Research - McGill
Collaborating Researcher - Columbia University
Master's Research - McGill
Collaborating Researcher - University of Tübingen
Collaborating Researcher - Karlsruhe Institute of Technology
PhD - McGill
Postdoctorate - UdeM
Principal supervisor:
Collaborating Researcher
PhD - McGill
Collaborating Alumni - McGill

Publications

Tackling Climate Change with Machine Learning: Fostering the Maturity of ML Applications for Climate Change
Shiva Madadkhani
Olivia Mendivil Ramos
Millie Chapman
Jesse Dunietz
Dataset Difficulty and the Role of Inductive Bias
Motivated by the goals of dataset pruning and defect identification, a growing body of methods has been developed to score individual examples within a dataset. These methods, which we call "example difficulty scores", are typically used to rank or categorize examples, but the consistency of rankings between different training runs, scoring methods, and model architectures is generally unknown. To determine how example rankings vary due to these random and controlled effects, we systematically compare different formulations of scores over a range of runs and model architectures. We find that scores largely share the following traits: they are noisy over individual runs of a model, strongly correlated with a single notion of difficulty, and reveal examples that range from being highly sensitive to insensitive to the inductive biases of certain model architectures. Drawing from statistical genetics, we develop a simple method for fingerprinting model architectures using a few sensitive examples. These findings guide practitioners in maximizing the consistency of their scores (e.g., by choosing appropriate scoring methods, number of runs, and subsets of examples) and establish comprehensive baselines for evaluating scores in the future.
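The paper compares several scoring formulations; as a rough, hypothetical sketch of the underlying setup, the snippet below scores each example by how often a model classifies it correctly across training epochs, then checks how consistently independent runs rank the examples. The scoring rule, names, and data here are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative "example difficulty score" (an assumption, not the paper's):
# `correct` is a (num_epochs, num_examples) boolean array of per-epoch
# predictions from one training run.
def learning_speed_score(correct: np.ndarray) -> np.ndarray:
    # Higher score = the example is classified correctly more often (easier).
    return correct.mean(axis=0)

# Consistency of rankings across runs, via mean pairwise Spearman correlation.
def ranking_consistency(scores_per_run: list[np.ndarray]) -> float:
    corrs = [
        spearmanr(scores_per_run[i], scores_per_run[j]).correlation
        for i in range(len(scores_per_run))
        for j in range(i + 1, len(scores_per_run))
    ]
    return float(np.mean(corrs))

# Toy usage: three independent runs on ten examples.
rng = np.random.default_rng(0)
runs = [rng.random((5, 10)) > 0.4 for _ in range(3)]
scores = [learning_speed_score(r) for r in runs]
print(f"mean pairwise Spearman: {ranking_consistency(scores):.2f}")
```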
Application-Driven Innovation in Machine Learning
Alan Aspuru-Guzik
Sara Beery
Bistra Dilkina
Priya L. Donti
Marzyeh Ghassemi
Hannah Kerner
Claire Monteleoni
Esther Rolf
Milind Tambe
Adam White
As applications of machine learning proliferate, innovative algorithms inspired by specific real-world challenges have become increasingly important. Such work offers the potential for significant impact not merely in domains of application but also in machine learning itself. In this paper, we describe the paradigm of application-driven research in machine learning, contrasting it with the more standard paradigm of methods-driven research. We illustrate the benefits of application-driven machine learning and how this approach can productively synergize with methods-driven work. Despite these benefits, we find that reviewing, hiring, and teaching practices in machine learning often hold back application-driven innovation. We outline how these processes may be improved.
Linear Weight Interpolation Leads to Transient Performance Gains
PhAST: Physics-Aware, Scalable, and Task-specific GNNs for Accelerated Catalyst Design
Simultaneous linear connectivity of neural networks modulo permutation
Ekansh Sharma
Tom Denton
Daniel M. Roy
A landmark environmental law looks ahead
Robert L. Fischman
J. B. Ruhl
Brenna R. Forester
Tanya M. Lama
Marty Kardos
Grethel Aguilar Rojas
Nicholas A. Robinson
Patrick D. Shirey
Gary A. Lamberti
Amy W. Ando
Stephen Palumbi
Michael Wara
Mark W. Schwartz
Matthew A. Williamson
Tanya Berger-Wolf
Sara Beery
Justin Kitzes
David Thau
Devis Tuia … (and 8 more)
Daniel Rubenstein
Caleb R. Hickman
Julie Thorstenson
Gregory E. Kaebnick
James P. Collins
Athmeya Jayaram
Thomas Deleuil
Ying Zhao
FoMo: Multi-Modal, Multi-Scale and Multi-Task Remote Sensing Foundation Models for Forest Monitoring
FoMo-Bench: a multi-modal, multi-scale and multi-task Forest Monitoring Benchmark for remote sensing foundation models
Forests are an essential part of Earth's ecosystems and natural systems, as well as providing services on which humanity depends, yet they are rapidly changing as a result of land use decisions and climate change. Understanding and mitigating negative effects requires parsing data on forests at global scale from a broad array of sensory modalities, and recently many such problems have been approached using machine learning algorithms for remote sensing. To date, forest-monitoring problems have largely been addressed in isolation. Inspired by the rise of foundation models for computer vision and remote sensing, we here present the first unified Forest Monitoring Benchmark (FoMo-Bench). FoMo-Bench consists of 15 diverse datasets encompassing satellite, aerial, and inventory data, covering a variety of geographical regions, and including multispectral, red-green-blue, synthetic aperture radar (SAR) and LiDAR data with various temporal, spatial and spectral resolutions. FoMo-Bench includes multiple types of forest-monitoring tasks, spanning classification, segmentation, and object detection. To further enhance the diversity of tasks and geographies represented in FoMo-Bench, we introduce a novel global dataset, TalloS, combining satellite imagery with ground-based annotations for tree species classification, encompassing 1,000+ categories across multiple hierarchical taxonomic levels (species, genus, family). Finally, we propose FoMo-Net, a baseline foundation model with the capacity to process any combination of commonly used spectral bands in remote sensing, across diverse ground sampling distances and geographical locations worldwide. This work aims to inspire research collaborations between machine learning and forest biology researchers in exploring scalable multi-modal and multi-task models for forest monitoring. All code and data will be made publicly available.
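FoMo-Net's architecture is only summarized above; as a hedged sketch of what "processing any combination of spectral bands" could look like in practice, one option is to give every known band its own learned projection and fuse whichever subset a given sensor provides. All class names and design choices below are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical band-agnostic input stem: one learned projection per known
# band; a sample is encoded from whichever bands its sensor actually has.
class BandAgnosticStem(nn.Module):
    def __init__(self, num_known_bands: int, dim: int):
        super().__init__()
        self.band_proj = nn.ModuleList(
            [nn.Conv2d(1, dim, kernel_size=4, stride=4) for _ in range(num_known_bands)]
        )

    def forward(self, bands: torch.Tensor, band_ids: list[int]) -> torch.Tensor:
        # bands: (batch, len(band_ids), H, W); band_ids indexes the known
        # bands that are present. Missing bands are simply never projected.
        feats = [
            self.band_proj[b](bands[:, i : i + 1]) for i, b in enumerate(band_ids)
        ]
        return torch.stack(feats).mean(dim=0)  # fuse the available bands

# Usage: an RGB+NIR satellite sample vs. a single-band aerial sample.
stem = BandAgnosticStem(num_known_bands=12, dim=64)
x_sat = torch.randn(2, 4, 64, 64)
print(stem(x_sat, band_ids=[0, 1, 2, 7]).shape)  # torch.Size([2, 64, 16, 16])
x_aerial = torch.randn(2, 1, 64, 64)
print(stem(x_aerial, band_ids=[3]).shape)        # torch.Size([2, 64, 16, 16])
```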
Towards Causal Representations of Climate Model Data
Charlotte Emilie Elektra Lange
Yaniv Gurwicz
Peer Nowack
Climate models, such as Earth system models (ESMs), are crucial for simulating future climate change based on projected Shared Socioeconomic Pathways (SSP) greenhouse gas emissions scenarios. While ESMs are sophisticated and invaluable, machine learning-based emulators trained on existing simulation data can project additional climate scenarios much faster and are computationally efficient. However, they often lack generalizability and interpretability. This work delves into the potential of causal representation learning, specifically the Causal Discovery with Single-parent Decoding (CDSD) method, which could render climate model emulation efficient and interpretable. We evaluate CDSD on multiple climate datasets, focusing on emissions, temperature, and precipitation. Our findings shed light on the challenges, limitations, and promise of using CDSD as a stepping stone towards more interpretable and robust climate model emulation.
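CDSD itself is not reproduced here; the snippet below is only a minimal sketch of the single-parent decoding constraint the method's name refers to, in which each observed variable is reconstructed from exactly one latent parent. The relaxed soft assignment, the linear per-variable decoder, and all names are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

# Sketch of single-parent decoding: the decoder's latent-to-observed graph
# is a hard assignment (one latent parent per observed variable) rather
# than a dense mixing matrix.
class SingleParentDecoder(nn.Module):
    def __init__(self, num_obs: int, num_latents: int):
        super().__init__()
        # Learnable logits over which latent is each observed variable's parent.
        self.parent_logits = nn.Parameter(torch.zeros(num_obs, num_latents))
        self.scale = nn.Parameter(torch.ones(num_obs))
        self.bias = nn.Parameter(torch.zeros(num_obs))

    def forward(self, z: torch.Tensor, hard: bool = False) -> torch.Tensor:
        # z: (batch, num_latents). Softmax relaxes the one-parent assignment
        # during training; hard=True snaps each row to a single parent.
        probs = torch.softmax(self.parent_logits, dim=-1)
        if hard:
            probs = torch.nn.functional.one_hot(
                probs.argmax(dim=-1), probs.shape[-1]
            ).float()
        return self.scale * (z @ probs.T) + self.bias

decoder = SingleParentDecoder(num_obs=8, num_latents=3)
x_hat = decoder(torch.randn(16, 3), hard=True)  # (16, 8), one latent per variable
```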