Courtney Paquette

Associate Academic Member
Canada CIFAR AI Chair
Assistant Professor, McGill University, Department of Mathematics and Statistics
Research Scientist, Google Brain
Research Topics
Optimization

Biography

Courtney Paquette is an assistant professor at McGill University and a Canada CIFAR AI Chair at Mila – Quebec Artificial Intelligence Institute.

Her research focuses on designing and analyzing algorithms for large-scale optimization problems, motivated by applications in data science.

She received her PhD in mathematics from the University of Washington (2017), held postdoctoral positions at Lehigh University (2017–2018) and the University of Waterloo (NSF postdoctoral fellowship, 2018–2019), and was a research scientist at Google Brain in Montréal (2019–2020).

Current Students

Master's Research - McGill University
Postdoctorate - McGill University
Master's Research - McGill University
Master's Research - McGill University
PhD - McGill University
Master's Research - McGill University
PhD - McGill University

Publications

Implicit Regularization or Implicit Conditioning? Exact Risk Trajectories of SGD in High Dimensions
Elliot Paquette
Ben Adlam
Jeffrey Pennington
Stochastic gradient descent (SGD) is a pillar of modern machine learning, serving as the go-to optimization algorithm for a diverse array of problems. While the empirical success of SGD is often attributed to its computational efficiency and favorable generalization behavior, neither effect is well understood and disentangling them remains an open problem. Even in the simple setting of convex quadratic problems, worst-case analyses give an asymptotic convergence rate for SGD that is no better than full-batch gradient descent (GD), and the purported implicit regularization effects of SGD lack a precise explanation. In this work, we study the dynamics of multi-pass SGD on high-dimensional convex quadratics and establish an asymptotic equivalence to a stochastic differential equation, which we call homogenized stochastic gradient descent (HSGD), whose solutions we characterize explicitly in terms of a Volterra integral equation. These results yield precise formulas for the learning and risk trajectories, which reveal a mechanism of implicit conditioning that explains the efficiency of SGD relative to GD. We also prove that the noise from SGD negatively impacts generalization performance, ruling out the possibility of any type of implicit regularization in this context. Finally, we show how to adapt the HSGD formalism to include streaming SGD, which allows us to produce an exact prediction for the excess risk of multi-pass SGD relative to that of streaming SGD (bootstrap risk).
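The setting the abstract describes can be illustrated with a minimal sketch (not the paper's analysis or its HSGD construction): multi-pass single-sample SGD and full-batch GD run on a convex quadratic least-squares problem. The problem sizes, step sizes, and the noiseless (interpolation) targets below are illustrative assumptions.

```python
import numpy as np

# Convex quadratic least-squares problem: f(x) = ||A x - b||^2 / (2 n).
rng = np.random.default_rng(0)
n, d = 500, 100
A = rng.standard_normal((n, d))  # rows a_i ~ N(0, I_d)
x_star = rng.standard_normal(d)
b = A @ x_star                   # noiseless targets (illustrative choice)

def risk(x):
    """Empirical risk 0.5 * mean_i (a_i . x - b_i)^2."""
    return 0.5 * np.mean((A @ x - b) ** 2)

gd_step = 0.5       # safe since eigenvalues of A^T A / n concentrate near 1
sgd_step = 0.5 / d  # single-sample gradients scale with ||a_i||^2 ~ d

x_gd = np.zeros(d)
x_sgd = np.zeros(d)
init_risk = risk(x_gd)
for epoch in range(20):
    # One full-batch GD step per epoch (n gradient evaluations).
    x_gd -= gd_step * A.T @ (A @ x_gd - b) / n
    # One SGD pass over a random permutation of the data (multi-pass SGD,
    # also n single-sample gradient evaluations, so per-epoch cost matches GD).
    for i in rng.permutation(n):
        x_sgd -= sgd_step * A[i] * (A[i] @ x_sgd - b[i])

print(f"initial risk: {init_risk:.4f}")
print(f"GD risk after 20 epochs:  {risk(x_gd):.2e}")
print(f"SGD risk after 20 epochs: {risk(x_sgd):.2e}")
```

Both iterates are given the same per-epoch gradient-evaluation budget, which is the comparison under which the paper's implicit-conditioning mechanism favors SGD over GD; this toy script only tracks the final empirical risk, not the exact risk trajectories the Volterra-equation formulas predict.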
Trajectory of Mini-Batch Momentum: Batch Size Saturation and Convergence in High Dimensions
Kiwon Lee
Andrew Nicholas Cheng
Elliot Paquette
Halting Time is Predictable for Large Models: A Universality Property and Average-Case Analysis
Bart van Merriënboer
Fabian Pedregosa