Portrait of David Rolnick

David Rolnick

Core Academic Member
Canada CIFAR AI Chair
Assistant Professor, McGill University, School of Computer Science
Adjunct Professor, Université de Montréal, Department of Computer Science and Operations Research

Biography

David Rolnick is an Assistant Professor and Canada CIFAR AI Chair at the School of Computer Science of McGill University and a Core Academic Member of Mila – Quebec AI Institute. His work focuses on applications of machine learning to the fight against climate change. He is co-founder and chair of Climate Change AI and scientific co-director of Sustainability in the Digital Age. He holds a Ph.D. in applied mathematics from the Massachusetts Institute of Technology (MIT). He has been an NSF Mathematical Sciences Postdoctoral Fellow, an NSF Graduate Research Fellow, and a Fulbright Fellow, and was named to MIT Technology Review's 2021 list of "35 Innovators Under 35".

Current Students

Postdoctoral Fellow - Université de Montréal
Principal supervisor:
Research Collaborator - Université Paris-Saclay
Co-supervisor:
Research Collaborator
Research Collaborator
Research Collaborator
Research Collaborator
Research Collaborator
Co-supervisor:
Master's Research - McGill University
Research Collaborator
Research Intern - Johannes Kepler University
Postdoctoral Fellow - Université de Montréal
Principal supervisor:
Research Intern
Research Intern - Université de Montréal
Master's Research - McGill University
Research Intern - Université de Montréal
Research Collaborator - Karlsruhe Institute of Technology
Research Collaborator
Research Intern - Osnabrück University
Master's Research - McGill University
Research Collaborator - McGill University
Research Collaborator - University of Dresden, Helmholtz Centre for Environmental Research Leipzig
Research Collaborator - National Observatory of Athens
Research Collaborator
Research Collaborator - KU Leuven
Research Intern - Cambridge University
Research Collaborator
Co-supervisor:
Postdoctoral Fellow - McGill University
PhD - Université de Montréal
Research Collaborator - RWTH Aachen University (Rheinisch-Westfälische Technische Hochschule Aachen)
Co-supervisor:
Master's Research - McGill University

Publications

On the importance of catalyst-adsorbate 3D interactions for relaxed energy predictions
Alvaro Carbonero
Alexandre AGM Duval
Victor Schmidt
Santiago Miret
Alex Hernandez-Garcia
The use of machine learning for material property prediction and discovery has traditionally centered on graph neural networks that incorporate the geometric configuration of all atoms. However, in practice not all of this information may be readily available, e.g. when evaluating the potentially unknown binding of adsorbates to a catalyst. In this paper, we investigate whether it is possible to predict a system's relaxed energy in the OC20 dataset while ignoring the relative position of the adsorbate with respect to the electro-catalyst. We consider SchNet, DimeNet++ and FAENet as base architectures and measure the impact of four modifications on model performance: removing edges in the input graph, pooling independent representations, not sharing the backbone weights and using an attention mechanism to propagate non-geometric relative information. We find that while removing binding site information impairs accuracy as expected, modified models are able to predict relaxed energies with remarkably decent MAE. Our work suggests future research directions in accelerated materials discovery where information on reactant configurations can be reduced or altogether omitted.
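As a hedged illustration of the first modification above, the sketch below removes every edge that connects an adsorbate atom to a catalyst atom, so a graph model never sees their relative placement. The helper name and the boolean per-atom tag encoding are assumptions for illustration, not the paper's code.

# Hedged sketch: drop adsorbate-catalyst edges from an atomistic graph.
# `edge_index` follows the common (2, n_edges) convention; the boolean
# `is_adsorbate` tag per atom is an assumed stand-in for OC20-style tags.
import torch

def drop_cross_edges(edge_index: torch.Tensor,
                     is_adsorbate: torch.Tensor) -> torch.Tensor:
    """Keep only edges whose endpoints lie on the same side
    (both adsorbate atoms or both catalyst atoms)."""
    src, dst = edge_index
    same_side = is_adsorbate[src] == is_adsorbate[dst]
    return edge_index[:, same_side]

Feeding such a pruned graph to a backbone like SchNet, DimeNet++ or FAENet corresponds to the "removing edges" ablation described in the abstract.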
ClimateSet: A Large-Scale Climate Model Dataset for Machine Learning
Julia Kaltenborn
Charlotte Emilie Elektra Lange
Venkatesh Ramesh
Philippe Brouillard
Yaniv Gurwicz
Chandni Nagda
Jakob Runge
Peer Nowack
Climate models have been key for assessing the impact of climate change and simulating future climate scenarios. The machine learning (ML) community has taken an increased interest in supporting climate scientists’ efforts on various tasks such as climate model emulation, downscaling, and prediction tasks. Many of those tasks have been addressed on datasets created with single climate models. However, both the climate science and ML communities have suggested that to address those tasks at scale, we need large, consistent, and ML-ready climate model datasets. Here, we introduce ClimateSet, a dataset containing the inputs and outputs of 36 climate models from the Input4MIPs and CMIP6 archives. In addition, we provide a modular dataset pipeline for retrieving and preprocessing additional climate models and scenarios. We showcase the potential of our dataset by using it as a benchmark for ML-based climate model emulation. We gain new insights about the performance and generalization capabilities of the different ML models by analyzing their performance across different climate models. Furthermore, the dataset can be used to train an ML emulator on several climate models instead of just one. Such a “super-emulator” can quickly project new climate change scenarios, complementing existing scenarios already provided to policymakers. We believe ClimateSet will create the basis needed for the ML community to tackle climate-related tasks at scale.
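For orientation, the snippet below shows the generic shape of such an emulation task: forcing fields as inputs and a simulated climate variable as the target, read from NetCDF files with xarray. The file and variable names are illustrative assumptions and do not reflect ClimateSet's actual pipeline API.

# Hedged sketch of assembling emulation input/output pairs; file and
# variable names below are placeholders, not ClimateSet's real layout.
import xarray as xr

forcings = xr.open_dataset("input4mips_ssp245_emissions.nc")  # emulator inputs
response = xr.open_dataset("cmip6_model_tas_ssp245.nc")       # climate model output
x = forcings["CO2_emissions"].values  # shape: (time, lat, lon)
y = response["tas"].values            # surface air temperature on the same grid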
SatBird: a Dataset for Bird Species Distribution Modeling using Remote Sensing and Citizen Science Data
Mélisande Teng
Amna Elmustafa
Benjamin Akera
Hager Radi
Multi-variable Hard Physical Constraints for Climate Model Downscaling
José González-Abad
Alex Hernández-García
Paula Harder
José Manuel Gutiérrez
FAENet: Frame Averaging Equivariant GNN for Materials Modeling
Alexandre AGM Duval
Victor Schmidt
Alex Hernandez-Garcia
Santiago Miret
Fragkiskos D. Malliaros
Applications of machine learning techniques for materials modeling typically involve functions known to be equivariant or invariant to specific symmetries. While graph neural networks (GNNs) have proven successful in such tasks, they enforce symmetries via the model architecture, which often reduces their expressivity, scalability and comprehensibility. In this paper, we introduce (1) a flexible framework relying on stochastic frame-averaging (SFA) to make any model E(3)-equivariant or invariant through data transformations. (2) FAENet: a simple, fast and expressive GNN, optimized for SFA, that processes geometric information without any symmetry-preserving design constraints. We prove the validity of our method theoretically and empirically demonstrate its superior accuracy and computational scalability in materials modeling on the OC20 dataset (S2EF, IS2RE) as well as common molecular modeling tasks (QM9, QM7-X). A package implementation is available at https://faenet.readthedocs.io.
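A minimal sketch of the stochastic frame-averaging idea for an invariant target such as energy follows; the PCA frame construction and the backbone interface are illustrative assumptions and do not reproduce the released package.

# Hedged sketch of stochastic frame averaging (SFA) for an invariant target.
# `model` is any assumed backbone mapping (positions, features) to a scalar.
import torch

def pca_frame(pos: torch.Tensor) -> torch.Tensor:
    """Orthonormal frame from PCA of centered 3D positions (n_atoms, 3)."""
    centered = pos - pos.mean(dim=0, keepdim=True)
    cov = centered.T @ centered / pos.shape[0]
    _, eigvecs = torch.linalg.eigh(cov)  # columns are principal axes
    return eigvecs

def sfa_energy(model, pos, feats, n_samples: int = 4) -> torch.Tensor:
    """Average predictions over randomly sign-flipped PCA frames so the
    output is (in expectation) invariant to rotations and reflections."""
    frame = pca_frame(pos)
    preds = []
    for _ in range(n_samples):
        signs = torch.randint(0, 2, (3,)) * 2.0 - 1.0   # random ±1 per axis
        projected = (pos - pos.mean(dim=0)) @ (frame * signs)
        preds.append(model(projected, feats))
    return torch.stack(preds).mean(dim=0)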
Hidden Symmetries of ReLU Networks
J. Grigsby
Elisenda Grigsby
Kathryn Lindsey
Fourier Neural Operators for Arbitrary Resolution Climate Data Downscaling
Qidong Yang
Alex Hernandez-Garcia
Paula Harder
Venkatesh Ramesh
Prasanna Sattegeri
D. Szwarcman
C. Watson
Climate simulations are essential in guiding our understanding of climate change and responding to its effects. However, it is computationally expensive to resolve complex climate processes at high spatial resolution. As one way to speed up climate simulations, neural networks have been used to downscale climate variables from fast-running low-resolution simulations, but high-resolution training data are often unobtainable or scarce, greatly limiting accuracy. In this work, we propose a downscaling method based on the Fourier neural operator. It trains with data of a small upsampling factor and then can zero-shot downscale its input to arbitrary unseen high resolution. Evaluated both on ERA5 climate model data and on the Navier-Stokes equation solution data, our downscaling model significantly outperforms state-of-the-art convolutional and generative adversarial downscaling models, both in standard single-resolution downscaling and in zero-shot generalization to higher upsampling factors. Furthermore, we show that our method also outperforms state-of-the-art data-driven partial differential equation solvers on Navier-Stokes equations. Overall, our work bridges the gap between simulation of a physical process and interpolation of low-resolution output, showing that it is possible to combine both approaches and significantly improve upon each other.
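The mechanism that enables zero-shot resolution changes is that a Fourier layer's learned weights act only on a fixed set of low-frequency modes, so the inverse FFT can be taken on any target grid. The sketch below illustrates this under assumed channel and mode counts, keeps only one corner of the spectrum for brevity, and is not the paper's implementation.

# Hedged sketch of a spectral layer that emits output at an arbitrary
# resolution; channel/mode counts and the single-layer setup are assumptions.
import torch

class SpectralUpsample2d(torch.nn.Module):
    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes
        self.weight = torch.nn.Parameter(
            torch.randn(channels, channels, modes, modes, dtype=torch.cfloat)
            / channels
        )

    def forward(self, x: torch.Tensor, out_size: tuple) -> torch.Tensor:
        b, c, _, _ = x.shape
        x_ft = torch.fft.rfft2(x)                      # input spectrum
        m = self.modes
        out_ft = torch.zeros(b, c, out_size[0], out_size[1] // 2 + 1,
                             dtype=torch.cfloat, device=x.device)
        # learned mixing on the retained low-frequency modes only
        out_ft[:, :, :m, :m] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, :m, :m], self.weight)
        # inverse FFT onto the (possibly finer) target grid
        return torch.fft.irfft2(out_ft, s=out_size)

Because the learned weights never depend on the grid size, the same layer trained at a small upsampling factor can be queried at a larger one, which is the zero-shot behavior described above.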
Bird Distribution Modelling using Remote Sensing and Citizen Science data
Mélisande Teng
Amna Elmustafa
Benjamin Akera
Lightweight, Pre-trained Transformers for Remote Sensing Timeseries
Gabriel Tseng
Ruben Cartuyvels
Ivan Zvonkov
Mirali Purohit
Hannah Kerner
Machine learning methods for satellite data have a range of societally relevant applications, but labels used to train models can be difficult or impossible to acquire. Self-supervision is a natural solution in settings with limited labeled data, but current self-supervised models for satellite data fail to take advantage of the characteristics of that data, including the temporal dimension (which is critical for many applications, such as monitoring crop growth) and availability of data from many complementary sensors (which can significantly improve a model's predictive performance). We present Presto (the Pretrained Remote Sensing Transformer), a model pre-trained on remote sensing pixel-timeseries data. By designing Presto specifically for remote sensing data, we can create a significantly smaller but performant model. Presto excels at a wide variety of globally distributed remote sensing tasks and performs competitively with much larger models while requiring far less compute. Presto can be used for transfer learning or as a feature extractor for simple models, enabling efficient deployment at scale.
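The feature-extractor pattern mentioned at the end of the abstract might look like the sketch below, where the encoder stands in for a pretrained model such as Presto; the interface and the mean-pooling over time are assumptions.

# Hedged sketch: embed pixel-timeseries with a frozen pretrained encoder,
# then fit a simple downstream classifier. The encoder interface is assumed.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression

@torch.no_grad()
def embed(encoder: torch.nn.Module, timeseries: np.ndarray) -> np.ndarray:
    """(n_pixels, n_timesteps, n_bands) -> (n_pixels, d) features."""
    encoder.eval()
    x = torch.as_tensor(timeseries, dtype=torch.float32)
    return encoder(x).mean(dim=1).numpy()   # pool per-timestep features

# Assumed usage:
# features = embed(pretrained_encoder, train_x)
# clf = LogisticRegression(max_iter=1000).fit(features, train_y)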
Maximal Initial Learning Rates in Deep ReLU Networks
Gaurav Iyer
Boris Hanin
Training a neural network requires choosing a suitable learning rate, which involves a trade-off between speed and effectiveness of convergence. While there has been considerable theoretical and empirical analysis of how large the learning rate can be, most prior work focuses only on late-stage training. In this work, we introduce the maximal initial learning rate η*, the largest learning rate at which a randomly initialized neural network can successfully begin training and achieve (at least) a given threshold accuracy.
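One natural way to estimate such a maximal initial learning rate empirically is bisection over candidate learning rates, briefly training a freshly initialized network at each one; the sketch below uses placeholder training and evaluation hooks and is not the paper's protocol.

# Hedged sketch: geometric bisection for the largest initial learning rate
# at which training still reaches a threshold accuracy. `make_model` and
# `train_and_eval` are assumed placeholder hooks.
def max_initial_lr(make_model, train_and_eval, threshold_acc: float,
                   lo: float = 1e-5, hi: float = 10.0, iters: int = 20) -> float:
    for _ in range(iters):
        mid = (lo * hi) ** 0.5        # geometric midpoint: lr spans decades
        acc = train_and_eval(make_model(), lr=mid)
        if acc >= threshold_acc:
            lo = mid                  # training succeeds: try a larger lr
        else:
            hi = mid                  # training fails: back off
    return lo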
Semi-Supervised Object Detection for Agriculture
Gabriel Tseng
Krisztina Sinkovics
Tom Watsham
Thomas C. Walters