
Antoine Lesage-Landry

Associate Academic Member
Associate Professor, Polytechnique Montréal, Department of Electrical Engineering
Research Topics
Online Learning
Optimization

Biography

I am an Associate Professor in the Department of Electrical Engineering at Polytechnique Montréal. I received my BEng degree in engineering physics from Polytechnique Montréal in 2015, and my PhD degree in electrical engineering from the University of Toronto in 2019. I was a postdoctoral scholar in the Energy & Resources Group at the University of California, Berkeley, from 2019 to 2020. My research interests include optimization, online learning and machine learning, and their application to power systems with renewable generation.

Current Students

Master's Research - Polytechnique Montréal
Master's Research - Polytechnique Montréal
Master's Research - Polytechnique Montréal
Master's Research - Polytechnique Montréal
Co-supervisor:
PhD - Université du Québec à Rimouski
Master's Research - Polytechnique Montréal
Master's Research - Polytechnique Montréal
Master's Research - Polytechnique Montréal
Master's Research - Polytechnique Montréal
PhD - Polytechnique Montréal
Master's Research - Polytechnique Montréal
PhD - Polytechnique Montréal
Master's Research - Polytechnique Montréal
Master's Research - Polytechnique Montréal
Master's Research - Polytechnique Montréal
PhD - Polytechnique Montréal
Master's Research - Polytechnique Montréal
Research Intern - Polytechnique Montréal

Publications

Multi-Agent Reinforcement Learning for Fast-Timescale Demand Response of Residential Loads
Vincent Mai
Philippe Maisonneuve
Tianyu Zhang
Hadi Nekoei
To integrate high amounts of renewable energy resources, electrical power grids must be able to cope with high-amplitude, fast-timescale variations in power generation. Frequency regulation through demand response has the potential to coordinate temporally flexible loads, such as air conditioners, to counteract these variations. Existing approaches for discrete control with dynamic constraints struggle to provide satisfactory performance for fast-timescale action selection with hundreds of agents. We propose a decentralized agent trained with multi-agent proximal policy optimization with localized communication. We explore two communication frameworks: hand-engineered, or learned through targeted multi-agent communication. The resulting policies perform well and robustly for frequency regulation, and scale seamlessly to arbitrary numbers of houses with constant processing times.
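The decision structure described above, where each decentralized agent acts on its own observation plus messages from a local neighborhood, can be sketched as follows. This is a minimal illustration only, not the paper's trained architecture: the weights `W_msg` and `W_act`, the ring topology, and the threshold action rule are all hypothetical stand-ins (in the paper, a shared policy is trained with multi-agent PPO).

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents = 8          # e.g. houses with air conditioners
obs_dim, msg_dim = 4, 2

# Hypothetical shared (parameter-tied) linear weights; a real system
# would train these with multi-agent proximal policy optimization.
W_msg = rng.standard_normal((msg_dim, obs_dim))
W_act = rng.standard_normal((1, obs_dim + msg_dim))

def decide(observations, neighbors):
    """One decentralized decision round: each agent emits a message,
    then acts on its own observation plus its neighbors' mean message."""
    messages = observations @ W_msg.T                      # (n_agents, msg_dim)
    actions = []
    for i in range(len(observations)):
        neigh_msg = messages[neighbors[i]].mean(axis=0)    # localized communication
        logits = W_act @ np.concatenate([observations[i], neigh_msg])
        actions.append(1 if logits[0] > 0 else 0)          # discrete on/off action
    return actions

# Ring topology: each house only talks to its two adjacent houses,
# so per-agent work is constant regardless of the total number of houses.
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)}
obs = rng.standard_normal((n_agents, obs_dim))
actions = decide(obs, neighbors)
```

Because the parameters are shared and each agent only reads a fixed-size neighborhood, the same policy applies unchanged to any number of agents, which is what makes the constant-processing-time scaling plausible.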
An Online Newton’s Method for Time-Varying Linear Equality Constraints
Jean-Luc Lupien
We consider online optimization problems with time-varying linear equality constraints. In this framework, an agent makes sequential decisions using only prior information. At every round, the agent suffers an environment-determined loss and must satisfy time-varying constraints. Both the loss functions and the constraints can be chosen adversarially. We propose the Online Projected Equality-constrained Newton Method (OPEN-M) to tackle this family of problems. We obtain sublinear dynamic regret and constraint violation bounds for OPEN-M under mild conditions. Namely, smoothness of the loss function and boundedness of the inverse Hessian at the optimum are required, but not convexity. Finally, we show OPEN-M outperforms state-of-the-art online constrained optimization algorithms in a numerical network flow application.
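The classical building block behind an equality-constrained Newton method is the KKT step: at each round, minimize the local quadratic model of the loss subject to the current constraints A x = b by solving one linear system. The sketch below shows that standard step; it is illustrative only and may differ from OPEN-M's exact update, and the quadratic loss in the example is a made-up test problem.

```python
import numpy as np

def equality_constrained_newton_step(x, grad, hess, A, b):
    """One Newton step for min f(x) s.t. A x = b.

    Solves the KKT system
        [ H   A^T ] [ dx  ]   [ -grad   ]
        [ A    0  ] [ lam ] = [ b - A x ]
    so x + dx satisfies the (possibly time-varying) constraints.
    """
    n, m = x.size, b.size
    K = np.block([[hess, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-grad, b - A @ x])
    sol = np.linalg.solve(K, rhs)
    return x + sol[:n]

# Example round: loss f(x) = 0.5 * ||x - c||^2 with constraint sum(x) = 1.
c = np.array([1.0, 2.0, 3.0])
x = np.zeros(3)
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
grad = x - c        # gradient of the quadratic loss at x
hess = np.eye(3)    # its (constant) Hessian
x = equality_constrained_newton_step(x, grad, hess, A, b)
# For a quadratic loss, a single step reaches the constrained minimizer.
```

In an online setting, the loss (and hence `grad`, `hess`) and the pair `(A, b)` change at every round, and the agent applies one such step per round using only the previous round's information; the paper's regret and constraint-violation bounds quantify how well this tracks the moving optimum.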