
Wesley Chung

PhD - McGill
Principal supervisor
Co-supervisor
Research topics
Online learning
Reinforcement learning
Deep learning
Optimization

Publications

Parseval Regularization for Continual Reinforcement Learning
Loss of plasticity, trainability loss, and primacy bias have been identified as issues arising when training deep neural networks on sequences of tasks -- all referring to the increased difficulty in training on new tasks. We propose to use Parseval regularization, which maintains orthogonality of weight matrices, to preserve useful optimization properties and improve training in a continual reinforcement learning setting. We show that it provides significant benefits to RL agents on a suite of gridworld, CARL and MetaWorld tasks. We conduct comprehensive ablations to identify the source of its benefits and investigate the effect of certain metrics associated with network trainability, including weight matrix rank, weight norms and policy entropy.
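The mechanism is straightforward to sketch: an orthogonality penalty pushes each weight matrix W toward W W^T = I. Below is a minimal PyTorch sketch under that reading; the helper name parseval_penalty, the coefficient, and the restriction to 2-D parameters are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def parseval_penalty(model: torch.nn.Module, coeff: float = 1e-4):
    # Hypothetical helper: sum of ||W W^T - I||_F^2 over dense weight
    # matrices, encouraging orthogonal weights (Parseval-style penalty).
    penalty = 0.0
    for param in model.parameters():
        if param.ndim == 2:  # dense weight matrices only (assumption)
            W = param
            eye = torch.eye(W.shape[0], device=W.device, dtype=W.dtype)
            penalty = penalty + ((W @ W.t() - eye) ** 2).sum()
    return coeff * penalty

# Usage sketch: add the penalty to the usual RL loss before backprop.
# loss = rl_loss + parseval_penalty(policy_network)
```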
Beyond Variance Reduction: Understanding the True Impact of Baselines on Policy Optimization
Valentin Thomas
Marlos C. Machado
Bandit and reinforcement learning (RL) problems can often be framed as optimization problems where the goal is to maximize average performance while having access only to stochastic estimates of the true gradient. Traditionally, stochastic optimization theory predicts that learning dynamics are governed by the curvature of the loss function and the noise of the gradient estimates. In this paper we demonstrate that this is not the case for bandit and RL problems. To allow our analysis to be interpreted in light of multi-step MDPs, we focus on techniques derived from stochastic optimization principles (e.g., natural policy gradient and EXP3) and we show that some standard assumptions from optimization theory are violated in these problems. We present theoretical results showing that, at least for bandit problems, curvature and noise are not sufficient to explain the learning dynamics and that seemingly innocuous choices like the baseline can determine whether an algorithm converges. These theoretical findings match our empirical evaluation, which we extend to multi-state MDPs.
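To make the baseline's role concrete, here is a minimal sketch (not from the paper) of REINFORCE on a two-armed bandit with a softmax policy. The update is lr * (r - baseline) * grad log pi(a); since E[grad log pi] = 0, any constant baseline leaves the expected gradient unchanged, yet the choice of constant still alters the stochastic learning dynamics. The arm means, noise level, and baseline value below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([1.0, 0.0])   # expected rewards of the two arms (toy values)
theta = np.zeros(2)            # softmax policy parameters
baseline = 0.5                 # try different constants to see different dynamics
lr = 0.1

for _ in range(2000):
    probs = np.exp(theta - theta.max())
    probs /= probs.sum()
    a = rng.choice(2, p=probs)             # sample an action from the policy
    r = means[a] + rng.normal(0.0, 0.1)    # noisy reward estimate
    grad_log = -probs.copy()
    grad_log[a] += 1.0                     # gradient of log pi(a) w.r.t. theta
    theta += lr * (r - baseline) * grad_log  # REINFORCE step with baseline

print(probs)  # probability mass should concentrate on the better arm
```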