David Rolnick

Core Academic Member
Canada CIFAR AI Chair
Assistant Professor, McGill University, School of Computer Science
Adjunct Professor, Université de Montréal, Department of Computer Science and Operations Research

Research Topics
AI and Sustainability
AI for Science
Applied Machine Learning
Biodiversity
Building Energy Management Systems
Climate
Climate Change
Climate Change AI
Climate Modeling
Climate Science
Climate Variable Downscaling
Computer Vision
Conservation Technology
Energy Systems
Forest Monitoring
Machine Learning and Climate Change
Machine Learning for Physical Sciences
Machine Learning in Climate Modeling
Machine Learning Theory
Out-of-Distribution (OOD) Detection
Remote Sensing
Satellite Remote Sensing
Time Series Forecasting
Vegetation

Biography

David Rolnick is an assistant professor at McGill University’s School of Computer Science, a core academic member of Mila – Quebec Artificial Intelligence Institute, and the holder of a Canada CIFAR AI Chair. Rolnick’s work focuses on applications of machine learning to help address climate change. He is the co-founder and chair of Climate Change AI and scientific co-director of Sustainability in the Digital Age. After completing his PhD in applied mathematics at the Massachusetts Institute of Technology (MIT), he was an NSF Mathematical Sciences Postdoctoral Research Fellow; he previously held an NSF Graduate Research Fellowship and a Fulbright Scholarship. He was named to MIT Technology Review’s “35 Innovators Under 35” in 2021.

Publications

Fourier Neural Operators for Arbitrary Resolution Climate Data Downscaling
Prasanna Sattigeri
D. Szwarcman
Campbell Watson
Climate simulations are essential in guiding our understanding of climate change and responding to its effects. However, it is computationally expensive to resolve complex climate processes at high spatial resolution. As one way to speed up climate simulations, neural networks have been used to downscale climate variables from fast-running low-resolution simulations, but high-resolution training data are often unobtainable or scarce, greatly limiting accuracy. In this work, we propose a downscaling method based on the Fourier neural operator. It trains with data of a small upsampling factor and then can zero-shot downscale its input to arbitrary unseen high resolution. Evaluated both on ERA5 climate model data and on the Navier-Stokes equation solution data, our downscaling model significantly outperforms state-of-the-art convolutional and generative adversarial downscaling models, both in standard single-resolution downscaling and in zero-shot generalization to higher upsampling factors. Furthermore, we show that our method also outperforms state-of-the-art data-driven partial differential equation solvers on Navier-Stokes equations. Overall, our work bridges the gap between simulation of a physical process and interpolation of low-resolution output, showing that it is possible to combine both approaches and significantly improve upon each other.
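The zero-shot property described above comes from parameterizing the operator in Fourier space, so the learned weights are independent of the sampling grid. The sketch below is a minimal illustrative 1D spectral layer (not the paper's implementation; the function name and shapes are assumptions) showing that the same weights can produce output at any resolution:

```python
import numpy as np

def spectral_layer(x, weights, out_len=None):
    """Apply a linear operator defined in Fourier space to signal x.

    weights: complex array of shape (K,) acting on the lowest K modes.
    out_len: output resolution; defaults to len(x). Because the learned
    parameters live in Fourier space, the same layer can emit output at
    any grid size -- the zero-shot upsampling property.
    """
    n = len(x)
    m = out_len or n
    X = np.fft.rfft(x) / n                 # normalized forward FFT
    Y = np.zeros(m // 2 + 1, dtype=complex)
    k = min(len(weights), len(X), len(Y))
    Y[:k] = X[:k] * weights[:k]            # act on low-frequency modes only
    return np.fft.irfft(Y * m, n=m)        # back to physical space on new grid

# Identity weights on 8 modes: the layer trained at length 64
# can be queried at length 256 without retraining.
x = np.sin(2 * np.pi * np.arange(64) / 64)
up = spectral_layer(x, np.ones(8, dtype=complex), out_len=256)
```

For a band-limited input like a pure sine, the upsampled output coincides with the true fine-grid signal, which is exactly the behavior that lets a model trained at a small upsampling factor generalize to larger ones.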
Bird Distribution Modelling using Remote Sensing and Citizen Science data
Mélisande Teng
Amna Elmustafa
Benjamin Akera
Lightweight, Pre-trained Transformers for Remote Sensing Timeseries
Ivan Zvonkov
Mirali Purohit
Hannah Kerner
Machine learning methods for satellite data have a range of societally relevant applications, but labels used to train models can be difficult or impossible to acquire. Self-supervision is a natural solution in settings with limited labeled data, but current self-supervised models for satellite data fail to take advantage of the characteristics of that data, including the temporal dimension (which is critical for many applications, such as monitoring crop growth) and availability of data from many complementary sensors (which can significantly improve a model's predictive performance). We present Presto (the Pretrained Remote Sensing Transformer), a model pre-trained on remote sensing pixel-timeseries data. By designing Presto specifically for remote sensing data, we can create a significantly smaller but performant model. Presto excels at a wide variety of globally distributed remote sensing tasks and performs competitively with much larger models while requiring far less compute. Presto can be used for transfer learning or as a feature extractor for simple models, enabling efficient deployment at scale.
<imports>
</imports>
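The self-supervised setup described above follows the general masked-modelling recipe: hide parts of a pixel-timeseries and train the model to reconstruct them. The function below is a generic sketch of that data preparation step, under assumed names and shapes (Presto's actual masking strategies are more structured, e.g. per-band and per-timestep):

```python
import numpy as np

def mask_timeseries(x, mask_ratio=0.5, seed=0):
    """Randomly mask timesteps of a pixel-timeseries of shape (T, bands)
    for self-supervised pretraining: the model is trained to reconstruct
    the hidden values. Illustrative sketch, not Presto's masking code."""
    rng = np.random.default_rng(seed)
    T = x.shape[0]
    n_mask = int(T * mask_ratio)
    idx = rng.choice(T, size=n_mask, replace=False)
    masked = x.copy()
    masked[idx] = 0.0          # replace hidden timesteps with a mask value
    targets = x[idx]           # values the model must reconstruct
    return masked, idx, targets
```

A reconstruction loss between the model's predictions at the masked positions and `targets` then provides the training signal without any labels.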
Semi-Supervised Object Detection for Agriculture
Krisztina Sinkovics
Tom Watsham
Thomas C. Walters
Bugs in the Data: How ImageNet Misrepresents Biodiversity
Alexandra Luccioni
ImageNet-1k is a dataset often used for benchmarking machine learning (ML) models and evaluating tasks such as image recognition and object detection. Wild animals make up 27% of ImageNet-1k but, unlike classes representing people and objects, these data have not been closely scrutinized. In the current paper, we analyze the 13,450 images from 269 classes that represent wild animals in the ImageNet-1k validation set, with the participation of expert ecologists. We find that many of the classes are ill-defined or overlapping, and that 12% of the images are incorrectly labeled, with some classes having >90% of images incorrect. We also find that both the wildlife-related labels and images included in ImageNet-1k present significant geographical and cultural biases, as well as ambiguities such as artificial animals, multiple species in the same image, or the presence of humans. Our findings highlight serious issues with the extensive use of this dataset for evaluating ML systems, the use of such algorithms in wildlife-related tasks, and more broadly the ways in which ML datasets are commonly created and curated.
Deep Networks as Paths on the Manifold of Neural Representations
Richard D Lange
Jordan Kyle Matelsky
Xinyue Wang
Konrad Paul Kording
General Purpose AI Systems in the AI Act: Trying to Fit a Square Peg Into a Round Hole
Claire Boine
Hard-Constrained Deep Learning for Climate Downscaling
Prasanna Sattigeri
D. Szwarcman
Campbell Watson
The availability of reliable, high-resolution climate and weather data is important to inform long-term decisions on climate adaptation and mitigation and to guide rapid responses to extreme events. Forecasting models are limited by computational costs and, therefore, often generate coarse-resolution predictions. Statistical downscaling, including super-resolution methods from deep learning, can provide an efficient method of upsampling low-resolution data. However, despite achieving visually compelling results in some cases, such models frequently violate conservation laws when predicting physical variables. In order to conserve physical quantities, here we introduce methods that guarantee statistical constraints are satisfied by a deep learning downscaling model, while also improving their performance according to traditional metrics. We compare different constraining approaches and demonstrate their applicability across different neural architectures as well as a variety of climate and weather data sets. Besides enabling faster and more accurate climate predictions through downscaling, we also show that our novel methodologies can improve super-resolution for satellite data and natural image data sets.
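One simple way to make such a conservation constraint hold exactly is an additive correction layer: shift each upsampled block so that its mean matches the corresponding coarse pixel. The sketch below illustrates that idea under assumed names and shapes; it is one possible constraint layer, not necessarily the paper's formulation:

```python
import numpy as np

def enforce_mean_constraint(pred_hi, lowres, factor):
    """Additively correct a high-resolution prediction so the mean of
    each (factor x factor) block equals the corresponding low-res pixel.
    pred_hi: (H*factor, W*factor) raw network output.
    lowres:  (H, W) coarse input whose values must be conserved.
    Illustrative hard-constraint sketch."""
    H, W = lowres.shape
    blocks = pred_hi.reshape(H, factor, W, factor)
    block_mean = blocks.mean(axis=(1, 3), keepdims=True)
    # Subtract each block's own mean, then add the conserved coarse value.
    corrected = blocks - block_mean + lowres.reshape(H, 1, W, 1)
    return corrected.reshape(H * factor, W * factor)
```

Because the correction is applied inside the model's forward pass, the conservation property is guaranteed by construction for every prediction, rather than merely encouraged by a penalty term.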
Normalization Layers Are All That Sharpness-Aware Minimization Needs
Maximilian Müller
Matthias Hein
Sharpness-aware minimization (SAM) was proposed to reduce sharpness of minima and has been shown to enhance generalization performance in various settings. In this work we show that perturbing only the affine normalization parameters (typically comprising 0.1% of the total parameters) in the adversarial step of SAM can outperform perturbing all of the parameters. This finding generalizes to different SAM variants and both ResNet (Batch Normalization) and Vision Transformer (Layer Normalization) architectures. We consider alternative sparse perturbation approaches and find that these do not achieve similar performance enhancement at such extreme sparsity levels, showing that this behaviour is unique to the normalization layers. Although our findings reaffirm the effectiveness of SAM in improving generalization performance, they cast doubt on whether this is solely caused by reduced sharpness.
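The restriction described above only changes which parameters receive the adversarial perturbation; the descent step still updates everything. A minimal sketch of one SAM step with a perturbation mask (assumed names, flat parameter vector, toy scale — not the paper's training code):

```python
import numpy as np

def sam_step(params, grad_fn, lr=0.1, rho=0.05, perturb_mask=None):
    """One sharpness-aware minimization step on a flat parameter vector.

    perturb_mask selects which parameters receive the adversarial ascent
    (e.g. only normalization-layer affine parameters); all parameters are
    still updated in the descent step. Illustrative sketch."""
    g = grad_fn(params)
    mask = np.ones_like(params) if perturb_mask is None else perturb_mask
    g_masked = g * mask
    norm = np.linalg.norm(g_masked) + 1e-12
    eps = rho * g_masked / norm        # ascent step on the selected subset
    g_adv = grad_fn(params + eps)      # gradient at the perturbed point
    return params - lr * g_adv         # descend using the SAM gradient

# Toy quadratic loss 0.5*||w||^2, whose gradient is w; only the first
# coordinate is perturbed, mimicking a sparse (normalization-only) mask.
w = sam_step(np.array([1.0, 1.0]), lambda p: p,
             perturb_mask=np.array([1.0, 0.0]))
```

With `rho=0` this reduces to plain gradient descent, which makes the role of the masked ascent step easy to isolate in experiments.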
Digitalization and the Anthropocene
Felix Creutzig
Daron Acemoglu
Xuemei Bai
Paul N. Edwards
Marie Josefine Hintz
Lynn H. Kaack
Siir Kilkis
Stefanie Kunkel
Amy Luers
Nikola Milojevic-Dupont
Dave Rejeski
Jürgen Renn
Christoph Rosol
Daniela Russ
Thomas Turnbull
Elena Verdolini
Felix Wagner
Charlie Wilson
Aicha Zekar
Marius Zumwald
Great claims have been made about the benefits of dematerialization in a digital service economy. However, digitalization has historically increased environmental impacts at local and planetary scales, affecting labor markets, resource use, governance, and power relationships. Here we study the past, present, and future of digitalization through the lens of three interdependent elements of the Anthropocene: (a) planetary boundaries and stability, (b) equity within and between countries, and (c) human agency and governance, mediated via (i) increasing resource efficiency, (ii) accelerating consumption and scale effects, (iii) expanding political and economic control, and (iv) deteriorating social cohesion. While direct environmental impacts matter, the indirect and systemic effects of digitalization are more profoundly reshaping the relationship between humans, technosphere, and planet. We develop three scenarios: planetary instability, green but inhumane, and deliberate for the good. We conclude by identifying leverage points that shift human–digital–Earth interactions toward sustainability.
A portrait of the different configurations between digitally-enabled innovations and climate governance
Pierre J. C. Chuard
Jennifer Garard
Karsten A. Schulz
Nilushi Kumarasinghe
Damon Matthews
Neural Networks as Paths through the Space of Representations
Richard D Lange
Jordan Kyle Matelsky
Xinyue Wang
Konrad Paul Kording