
David Rolnick

Core Academic Member
Canada CIFAR AI Chair
Assistant Professor, McGill University, School of Computer Science
Adjunct Professor, Université de Montréal, Department of Computer Science and Operations Research
Research Topics
AI and Sustainability
AI for Science
Applied Machine Learning
Biodiversity
Building Energy Management Systems
Climate
Climate Change
Climate Change AI
Climate Modeling
Climate Science
Climate Variable Downscaling
Computer Vision
Conservation Technology
Energy Systems
Forest Monitoring
Machine Learning and Climate Change
Machine Learning for Physical Sciences
Machine Learning in Climate Modeling
Machine Learning Theory
Out-of-Distribution (OOD) Detection
Remote Sensing
Satellite Remote Sensing
Time Series Forecasting
Vegetation

Biography

David Rolnick is an assistant professor at McGill University’s School of Computer Science, a core academic member of Mila – Quebec Artificial Intelligence Institute, and a Canada CIFAR AI Chair. His work focuses on applications of machine learning to help address climate change. He is co-founder and chair of Climate Change AI and scientific co-director of Sustainability in the Digital Age. He completed his PhD in applied mathematics at the Massachusetts Institute of Technology (MIT) and has been an NSF Mathematical Sciences Postdoctoral Research Fellow, an NSF Graduate Research Fellow, and a Fulbright Scholar. He was named to MIT Technology Review’s “35 Innovators Under 35” in 2021.

Current Students

Collaborating researcher
Collaborating Alumni - McGill University
Collaborating researcher - Cambridge University
Co-supervisor:
Postdoctorate - McGill University
Collaborating researcher - McGill University
Collaborating researcher - N/A
Co-supervisor:
Master's Research - McGill University
Collaborating researcher - Leipzig University
Collaborating researcher
Collaborating researcher
Collaborating researcher
Independent visiting researcher - Politecnico di Milano
Independent visiting researcher
Collaborating researcher - Université de Montréal
Collaborating researcher - Johannes Kepler University
Collaborating researcher - University of Amsterdam
Master's Research - McGill University
PhD - McGill University
PhD - McGill University
Collaborating researcher
Independent visiting researcher - Université de Montréal
Collaborating researcher - University of East Anglia
Collaborating researcher
Collaborating researcher - Columbia University
Master's Research - McGill University
Postdoctorate - McGill University
Co-supervisor:
PhD - University of Waterloo
Co-supervisor:
Collaborating Alumni - Université de Montréal
Master's Research - McGill University
Collaborating researcher - Columbia University
Master's Research - McGill University
Collaborating researcher - University of Tübingen
Collaborating researcher - Karlsruhe Institute of Technology
PhD - McGill University
Postdoctorate - Université de Montréal
Principal supervisor:
Collaborating researcher
PhD - McGill University
Collaborating Alumni - McGill University

Publications

Understanding the Evolution of Linear Regions in Deep Reinforcement Learning
Setareh Cohan
Nam Hee Gordon Kim
Michiel van de Panne
Policies produced by deep reinforcement learning are typically characterised by their learning curves, but they remain poorly understood in many other respects. ReLU-based policies result in a partitioning of the input space into piecewise linear regions. We seek to understand how observed region counts and their densities evolve during deep reinforcement learning using empirical results that span a range of continuous control tasks and policy network dimensions. Intuitively, we may expect that during training, the region density increases in the areas that are frequently visited by the policy, thereby affording fine-grained control. We use recent theoretical and empirical results for the linear regions induced by neural networks in supervised learning settings for grounding and comparison of our results. Empirically, we find that the region density increases only moderately throughout training, as measured along fixed trajectories coming from the final policy. However, the trajectories themselves also increase in length during training, and thus the region densities decrease as seen from the perspective of the current trajectory. Our findings suggest that the complexity of deep reinforcement learning policies does not principally emerge from a significant growth in the complexity of functions observed on-and-around trajectories of the policy.
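The notion of region density along a trajectory can be illustrated concretely: the linear region containing an input is determined by the on/off pattern of every ReLU unit, so counting distinct activation patterns along densely sampled points of a trajectory approximates how many regions it crosses. The sketch below uses a small random network and a straight-line trajectory purely for illustration; the network sizes, trajectory, and sampling resolution are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small random ReLU network standing in for a policy network.
# Sizes are illustrative; the paper studies trained control policies.
sizes = [4, 32, 32, 2]
weights = [rng.standard_normal((m, n)) / np.sqrt(n) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [0.1 * rng.standard_normal(m) for m in sizes[1:]]

def activation_pattern(x):
    """On/off pattern of every ReLU unit; two inputs share a linear
    region exactly when they share this pattern."""
    pattern, h = [], x
    for W, b in zip(weights[:-1], biases[:-1]):  # final layer is linear, no ReLU
        pre = W @ h + b
        pattern.append(pre > 0)
        h = np.maximum(pre, 0.0)
    return tuple(np.concatenate(pattern))

# A straight-line "trajectory" through input space, densely sampled.
start, end = rng.standard_normal(4), rng.standard_normal(4)
points = [(1 - t) * start + t * end for t in np.linspace(0.0, 1.0, 5000)]

# Each change of activation pattern between consecutive samples marks a
# crossing into a new linear region (an approximation: regions narrower
# than the sampling step can be missed).
patterns = [activation_pattern(p) for p in points]
crossings = sum(p1 != p2 for p1, p2 in zip(patterns[:-1], patterns[1:]))
length = np.linalg.norm(end - start)

print(f"regions crossed: {crossings + 1}")
print(f"region density along the segment: {(crossings + 1) / length:.2f} per unit length")
```

Comparing this density along a fixed trajectory before and after training is the kind of measurement the paper reports at scale for real control tasks.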
Hidden Hypergraphs, Error-Correcting Codes, and Critical Learning in Hopfield Networks
Christopher Hillar
Tenzin Chan
Rachel Taubman
In 1943, McCulloch and Pitts introduced a discrete recurrent neural network as a model for computation in brains. The work inspired breakthroughs such as the first computer design and the theory of finite automata. We focus on learning in Hopfield networks, a special case with symmetric weights and fixed-point attractor dynamics. Specifically, we explore minimum energy flow (MEF) as a scalable convex objective for determining network parameters. We catalog various properties of MEF, such as biological plausibility, and then compare to classical approaches in the theory of learning. Trained Hopfield networks can perform unsupervised clustering and define novel error-correcting coding schemes. They also efficiently find hidden structures (cliques) in graph theory. We extend this known connection from graphs to hypergraphs and discover n-node networks with robust storage of 2^Ω(n^(1−ε)) memories for any ε > 0. In the case of graphs, we also determine a critical ratio of training samples at which networks generalize completely.
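The Hopfield setup the paper builds on can be sketched briefly: symmetric weights define a quadratic energy, and asynchronous updates descend that energy until a fixed-point attractor (a stored memory) is reached, which is what gives the network its error-correcting behaviour. The sketch below uses classical Hebbian storage as a baseline, not the paper's MEF objective, and the network size, pattern count, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 64, 3  # n binary (+/-1) neurons, k stored patterns (illustrative sizes)

# Classical Hebbian storage: symmetric weights, zero diagonal.
# (The paper instead learns weights with the convex minimum-energy-flow objective.)
patterns = rng.choice([-1, 1], size=(k, n))
W = (patterns.T @ patterns).astype(float) / n
np.fill_diagonal(W, 0.0)

def energy(x):
    """Hopfield energy E(x) = -1/2 x^T W x; asynchronous updates never increase it."""
    return -0.5 * x @ W @ x

def recall(x, steps=2000):
    """Asynchronous updates that descend the energy toward a fixed-point attractor."""
    x = x.copy()
    for _ in range(steps):
        i = rng.integers(n)
        x[i] = 1 if W[i] @ x >= 0 else -1
    return x

# Corrupt a stored pattern and let the dynamics clean it up (error correction).
noisy = patterns[0].copy()
noisy[rng.choice(n, size=10, replace=False)] *= -1

recovered = recall(noisy)
print("energy before/after:", energy(noisy), energy(recovered))
print("bits matching stored pattern:", int((recovered == patterns[0]).sum()), "/", n)
```

With few stored patterns relative to n, the corrupted input typically falls back into the attractor of the original memory; the paper's contribution is a convex training objective and constructions that push the number of robustly stored memories far beyond what this Hebbian baseline achieves.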