
David Rolnick

Core Academic Member
Canada CIFAR AI Chair
Assistant Professor, McGill University, School of Computer Science
Adjunct Professor, Université de Montréal, Department of Computer Science and Operations Research
Research Topics
Applied Machine Learning
Machine Learning in Climate Modeling
Machine Learning and Climate Change
Machine Learning for Physical Sciences
Biodiversity
Climate Change
Climate
Out-of-Distribution (OOD) Detection
AI and Sustainability
AI for Science
AI for Climate Change
Climate Modeling
Time Series Forecasting
Downscaling of Climate Variables
Climate Science
Forest Monitoring
Building Energy Management Systems
Energy Systems
Conservation Technology
Remote Sensing
Satellite Remote Sensing
Machine Learning Theory
Vegetation
Computer Vision

Biography

David Rolnick is an Assistant Professor and Canada CIFAR AI Chair at McGill University's School of Computer Science and a Core Academic Member of Mila – Quebec AI Institute. His work focuses on applications of machine learning to the fight against climate change. He is co-founder and chair of Climate Change AI and scientific co-director of Sustainability in the Digital Age. He holds a PhD in applied mathematics from the Massachusetts Institute of Technology (MIT). He was an NSF Mathematical Sciences Postdoctoral Fellow, an NSF Graduate Research Fellow, and a Fulbright Fellow, and was named to MIT Technology Review's list of "35 Innovators Under 35" in 2021.

Current Students

Research collaborator
Alumni collaborator - McGill
Research collaborator - Cambridge University
Postdoctoral fellow - McGill
Research collaborator - McGill
Research collaborator - N/A
PhD - McGill
Research collaborator - Leipzig University
Research master's - McGill
Research collaborator
Research collaborator
Research collaborator
Independent visiting researcher - Politecnico di Milano
Independent visiting researcher
Research collaborator - Johannes Kepler University
Research collaborator - University of Amsterdam
Research master's - McGill
Independent visiting researcher - Université de Montréal
Research collaborator - Polytechnique Montréal
Research collaborator - University of East Anglia
Research collaborator
Research collaborator - Columbia University
Postdoctoral fellow - McGill
Research collaborator - University of Waterloo
Alumni collaborator - UdeM
Research master's - McGill
Research collaborator - Columbia University
Research master's - McGill
Research collaborator - University of Tübingen
Independent visiting researcher
Research collaborator - Karlsruhe Institute of Technology
PhD - McGill
Alumni collaborator - UdeM
Research collaborator
PhD - McGill
Research collaborator - Technical University of Munich

Publications

Galileo: Learning Global & Local Features of Many Remote Sensing Modalities
Anthony Fuller
Henry Herzog
Patrick Beukema
Favyen Bastani
James R Green
Evan Shelhamer
Hannah Kerner
We introduce a highly multimodal transformer to represent many remote sensing modalities - multispectral optical, synthetic aperture radar, elevation, weather, pseudo-labels, and more - across space and time. These inputs are useful for diverse remote sensing tasks, such as crop mapping and flood detection. However, learning shared representations of remote sensing data is challenging, given the diversity of relevant data modalities, and because objects of interest vary massively in scale, from small boats (1-2 pixels and fast) to glaciers (thousands of pixels and slow). We present a novel self-supervised learning algorithm that extracts multi-scale features across a flexible set of input modalities through masked modeling. Our dual global and local contrastive losses differ in their targets (deep representations vs. shallow input projections) and masking strategies (structured vs. not). Our Galileo is a single generalist model that outperforms SoTA specialist models for satellite images and pixel time series across eleven benchmarks and multiple tasks.
Using Image-based AI for insect monitoring and conservation - InsectAI COST Action
Tom August
Mario V Balzan
Paul Bodesheim
Gunnar Brehm
Lisette Cantú-Salazar
Sílvia Castro
Joseph Chipperfield
Guillaume Ghisbain
Alba Gomez-Segura
Jérémie Goulnik
Quentin Groom
Laurens Hogeweg
Chantal Huijbers
Andreas Kamilaris
Karolis Kazlauskis
Wouter Koch
Dimitri Korsch
João Loureiro
Youri Martin
Angeliki F Martinou
Kent McFarland
Xavier Mestdagh
Denis Michez
Charlie Outhwaite
Luca Pegoraro
Nadja Pernat
Lars B. Pettersson
Pavel Pipek
Cristina Preda
Tobias Roth
David B Roy
Helen Roy
Veljo Runnel
Martina Sasic
Dmitry Schigel
Julie Koch Sheard
Cecilie Svenningsen
Heliana Teixeira
Nicolas Titeux
Thomas Tscheulin
Elli Tzirkalli
Marijn van der Velde
Roel van Klink
Nicolas J Vereecken
Sarah Vray
Toke Thomas Høye
The InsectAI COST Action will support insect monitoring and conservation at the national and continental scale in order to understand and counteract widespread insect declines. The Action will bring together a critical mass of researchers and stakeholders in image-based insect AI technologies to direct and drive the research agenda, build research capacity across Europe and support innovation and application. There is mounting evidence that populations of insects around the world are in sharp decline. Understanding trends in species and their drivers is key to knowing the size of the challenge, its causes and how to address it. To identify solutions that lead to sustainable biodiversity alongside economic prosperity, insect monitoring should be efficient and provide standardised and frequently updated status indicators to guide conservation actions. The EU Biodiversity Strategy 2030 identifies the critical challenge of delivering standardised information about the state of nature, and image-based insect AI can contribute to this. Specifically, the EU Nature Restoration Law will likely set binding targets for the high-resolution data that cameras can provide. Thus, outputs of the Action will contribute directly to the implementation of EU policies, where biodiversity monitoring is considered a key component. The InsectAI COST Action will organise workshops, conferences, short-term scientific missions, hackathons, design sprints and much more, across four Working Groups. These groups will address how image-based insect AI technologies can best address Societal Needs, support innovation in Image Collection hardware, create standardised approaches for Image Processing and develop novel Data Analysis and Integration methods for turning data into actionable insights.
The Butterfly Effect: Neural Network Training Trajectories Are Highly Sensitive to Initial Conditions
Neural network training is inherently sensitive to initialization and the randomness induced by stochastic gradient descent. However, it is unclear to what extent such effects lead to meaningfully different networks, either in terms of the models' weights or the underlying functions that were learned. In this work, we show that during the initial "chaotic" phase of training, even extremely small perturbations reliably cause otherwise identical training trajectories to diverge, an effect that diminishes rapidly over training time. We quantify this divergence through (i)
Tree semantic segmentation from aerial image time series
Earth's forests play an important role in the fight against climate change, and are in turn negatively affected by it. Effective monitoring of different tree species is essential to understanding and improving the health and biodiversity of forests. In this work, we address the challenge of tree species identification by performing semantic segmentation of trees using an aerial image dataset spanning over a year. We compare models trained on single images versus those trained on time series to assess the impact of tree phenology on segmentation performance. We also introduce a simple convolutional block for extracting spatio-temporal features from image time series, enabling the use of popular pretrained backbones and methods. We leverage the hierarchical structure of tree species taxonomy by incorporating a custom loss function that refines predictions at three levels: species, genus, and higher-level taxa. Our findings demonstrate the superiority of our methodology in exploiting the time series modality and confirm that enriching labels using taxonomic information improves the semantic segmentation performance.
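The three-level taxonomic loss mentioned above can be illustrated with a minimal numpy sketch (the function names, label mappings, and weights here are hypothetical illustrations, not the paper's implementation): coarser-level probabilities are obtained by summing species probabilities within each genus or family, and the per-level cross-entropies are combined with decreasing weights.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def hierarchical_loss(logits, species_label, species_to_genus, genus_to_family,
                      weights=(1.0, 0.5, 0.25)):
    """Weighted cross-entropy at species, genus, and family level.

    Coarser-level probabilities are obtained by summing the species
    probabilities that belong to each group (illustrative scheme).
    """
    p_species = softmax(logits)

    # Aggregate species probabilities into genus probabilities.
    n_genus = max(species_to_genus) + 1
    p_genus = np.zeros(n_genus)
    for s, g in enumerate(species_to_genus):
        p_genus[g] += p_species[s]

    # Aggregate genus probabilities into family probabilities.
    n_family = max(genus_to_family) + 1
    p_family = np.zeros(n_family)
    for g, f in enumerate(genus_to_family):
        p_family[f] += p_genus[g]

    g_label = species_to_genus[species_label]
    f_label = genus_to_family[g_label]

    w_s, w_g, w_f = weights
    return (-w_s * np.log(p_species[species_label])
            - w_g * np.log(p_genus[g_label])
            - w_f * np.log(p_family[f_label]))

# 4 species grouped into 2 genera, both genera in 1 family
loss = hierarchical_loss(np.array([2.0, 0.1, -1.0, 0.3]),
                         species_label=0,
                         species_to_genus=[0, 0, 1, 1],
                         genus_to_family=[0, 0])
```

With this construction, a prediction that picks the wrong species but the right genus is penalized less than one that is wrong at every level.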
Insect Identification in the Wild: The AMI Dataset
F. Cunha
M. J. Bunsen
L. Pasi
N. Pinoy
Flemming Helsing
JoAnne Russo
Marc Botham
Michael Sabourin
Jonathan Fréchette
Alexandre Anctil
Yacksecari Lopez
Eduardo Navarro
Filonila Perez Pimentel
Ana Cecilia Zamora
José Alejandro Ramirez Silva
Jonathan Gagnon
Tom August
K. Bjerge
Alba Gomez Segura
Marc Bélisle
Yves Basset
K. P. McFarland
David Roy
Toke Thomas Høye
Maxim Larrivée
Insects represent half of all global biodiversity, yet many of the world's insects are disappearing, with severe implications for ecosystems and agriculture. Despite this crisis, data on insect diversity and abundance remain woefully inadequate, due to the scarcity of human experts and the lack of scalable tools for monitoring. Ecologists have started to adopt camera traps to record and study insects, and have proposed computer vision algorithms as an answer for scalable data processing. However, insect monitoring in the wild poses unique challenges that have not yet been addressed within computer vision, including the combination of long-tailed data, extremely similar classes, and significant distribution shifts. We provide the first large-scale machine learning benchmarks for fine-grained insect recognition, designed to match real-world tasks faced by ecologists. Our contributions include a curated dataset of images from citizen science platforms and museums, and an expert-annotated dataset drawn from automated camera traps across multiple continents, designed to test out-of-distribution generalization under field conditions. We train and evaluate a variety of baseline algorithms and introduce a combination of data augmentation techniques that enhance generalization across geographies and hardware setups.
Causal Representation Learning in Temporal Data via Single-Parent Decoding
Scientific research often seeks to understand the causal structure underlying high-level variables in a system. For example, climate scientists study how phenomena, such as El Niño, affect other climate processes at remote locations across the globe. However, scientists typically collect low-level measurements, such as geographically distributed temperature readings. From these, one needs to learn both a mapping to causally-relevant latent variables, such as a high-level representation of the El Niño phenomenon and other processes, as well as the causal model over them. The challenge is that this task, called causal representation learning, is highly underdetermined from observational data alone, requiring other constraints during learning to resolve the indeterminacies. In this work, we consider a temporal model with a sparsity assumption, namely single-parent decoding: each observed low-level variable is only affected by a single latent variable. Such an assumption is reasonable in many scientific applications that require finding groups of low-level variables, such as extracting regions from geographically gridded measurement data in climate research or capturing brain regions from neural activity data. We demonstrate the identifiability of the resulting model and propose a differentiable method, Causal Discovery with Single-parent Decoding (CDSD), that simultaneously learns the underlying latents and a causal graph over them. We assess the validity of our theoretical results using simulated data and showcase the practical validity of our method in an application to real-world data from the climate science field.
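The single-parent assumption can be made concrete with a small simulated example (a hedged sketch with illustrative parameters, not the CDSD algorithm itself): latents follow a linear autoregression, and each observed variable loads on exactly one latent.

```python
import numpy as np

rng = np.random.default_rng(0)
n_latent, n_obs, T = 3, 12, 500

# Latent temporal dynamics: a linear autoregression whose matrix A plays
# the role of the causal graph over latents (illustrative values only).
A = np.array([[0.8, 0.0, 0.0],
              [0.3, 0.7, 0.0],
              [0.0, 0.2, 0.9]])
Z = np.zeros((T, n_latent))
for t in range(1, T):
    Z[t] = A @ Z[t - 1] + 0.1 * rng.standard_normal(n_latent)

# Single-parent decoding: each observed variable is driven by exactly one
# latent, e.g. each grid cell belongs to one climate region.
parent = np.arange(n_obs) % n_latent          # fixed parent assignment
loadings = rng.uniform(0.8, 1.2, size=n_obs)  # per-observation scaling
X = Z[:, parent] * loadings + 0.02 * rng.standard_normal((T, n_obs))

# Under this assumption, observations sharing a parent are almost
# perfectly correlated, so grouping by empirical correlation recovers
# which latent each low-level variable belongs to.
C = np.corrcoef(X.T)
```

Because observations sharing a parent are near-copies of the same latent signal, their empirical correlations are much higher than cross-group correlations, which is what makes the grouping recoverable from data.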
Pushing the frontiers in climate modelling and analysis with machine learning
Veronika Eyring
William D. Collins
Pierre Gentine
Elizabeth A. Barnes
Marcelo Barreiro
Tom Beucler
Marc Bocquet
Christopher S. Bretherton
Hannah M. Christensen
Katherine Dagon
David John Gagne
David Hall
Dorit Hammerling
Stephan Hoyer
Fernando Iglesias-Suarez
Ignacio Lopez-Gomez
Marie C. McGraw
Gerald A. Meehl
Maria J. Molina
Claire Monteleoni
Juliane Mueller
Michael S. Pritchard
Jakob Runge
Philip Stier
Oliver Watt-Meyer
Katja Weigel
Rose Yu
Laure Zanna
Evaluating the transferability potential of deep learning models for climate downscaling
Ayush Prasad
Prasanna Sattegeri
D. Szwarcman
Campbell Watson
Climate downscaling, the process of generating high-resolution climate data from low-resolution simulations, is essential for understanding and adapting to climate change at regional and local scales. Deep learning approaches have proven useful in tackling this problem. However, existing studies usually focus on training models for one specific task, location and variable, which are therefore limited in their generalizability and transferability. In this paper, we evaluate the efficacy of training deep learning downscaling models on multiple diverse climate datasets to learn more robust and transferable representations. We evaluate architectures' zero-shot transferability using CNNs, Fourier Neural Operators (FNOs), and vision Transformers (ViTs). We assess the spatial, variable, and product transferability of downscaling models experimentally, to understand the generalizability of these different architecture types.
Stealing part of a production language model
Nicholas Carlini
Daniel Paleka
Krishnamurthy Dj Dvijotham
Thomas Steinke
Jonathan Hayase
A. Feder Cooper
Katherine Lee
Matthew Jagielski
Milad Nasr
Arthur Conmy
Eric Wallace
Florian Tramèr
We introduce the first model-stealing attack that extracts precise, nontrivial information from black-box production language models like OpenAI's ChatGPT or Google's PaLM-2. Specifically, our attack recovers the embedding projection layer (up to symmetries) of a transformer model, given typical API access. For under \\
Improving Molecular Modeling with Geometric GNNs: an Empirical Study
Fragkiskos D. Malliaros
Alexandre AGM Duval
The Butterfly Effect: Tiny Perturbations Cause Neural Network Training to Diverge
Neural network training begins with a chaotic phase in which the network is sensitive to small perturbations, such as those caused by stochastic gradient descent (SGD). This sensitivity can cause identically initialized networks to diverge both in parameter space and functional similarity. However, the exact degree to which networks are sensitive to perturbation, and the sensitivity of networks as they transition out of the chaotic phase, is unclear. To address this uncertainty, we apply a controlled perturbation at a single point in training time and measure its effect on otherwise identical training trajectories. We find that both the
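The controlled-perturbation protocol can be mimicked in a toy experiment (a minimal sketch with a hypothetical two-layer tanh network, not the paper's actual setup): train two identically initialized copies, nudge one set of weights by a tiny epsilon at a single step, and track the distance between the two trajectories over time.

```python
import numpy as np

# Toy regression data for a tiny one-hidden-layer tanh network trained
# by gradient descent (a hypothetical stand-in for the networks studied).
rng = np.random.default_rng(42)
X = rng.standard_normal((64, 4))
y = np.sin(X.sum(axis=1))

def init_params():
    r = np.random.default_rng(0)  # identical initialization for both runs
    return [0.5 * r.standard_normal((4, 16)), 0.5 * r.standard_normal(16)]

def gd_step(params, X, y, lr=0.05):
    W1, w2 = params
    h = np.tanh(X @ W1)
    err = h @ w2 - y  # residuals of the squared-error loss
    grad_w2 = h.T @ err / len(y)
    grad_W1 = X.T @ ((err[:, None] * w2) * (1.0 - h**2)) / len(y)
    return [W1 - lr * grad_W1, w2 - lr * grad_w2]

def distance(p, q):
    return np.sqrt(sum(((a - b) ** 2).sum() for a, b in zip(p, q)))

# Two identical trajectories; one gets a tiny perturbation at step k.
a, b = init_params(), init_params()
eps, k, steps = 1e-6, 5, 200
dists = []
for t in range(steps):
    if t == k:
        b[0] = b[0] + eps  # controlled perturbation at a single point in time
    a, b = gd_step(a, X, y), gd_step(b, X, y)
    dists.append(distance(a, b))
# dists traces how far apart the two parameter vectors drift over training
```

Plotting `dists` on a log scale shows whether the perturbation grows, shrinks, or stays flat for this particular toy problem.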
A machine learning pipeline for automated insect monitoring
F. Cunha
M. J. Bunsen
L. Pasi
Maxim Larrivée
Climate change and other anthropogenic factors have led to a catastrophic decline in insects, endangering both biodiversity and the ecosystem services on which human society depends. Data on insect abundance, however, remains woefully inadequate. Camera traps, conventionally used for monitoring terrestrial vertebrates, are now being modified for insects, especially moths. We describe a complete, open-source machine learning-based software pipeline for automated monitoring of moths via camera traps, including object detection, moth/non-moth classification, fine-grained identification of moth species, and tracking individuals. We believe that our tools, which are already in use across three continents, represent the future of massively scalable data collection in entomology.