Sabyasachi Sahoo

PhD - Université Laval
Supervisor

Publications

Layerwise Early Stopping for Test Time Adaptation
Sabyasachi Sahoo
Mostafa ElAraby
Jonas Ngnawe
Yann Pequignot
Frederic Precioso
GROOD: GRadient-aware Out-Of-Distribution detection in interpolated manifolds
Mostafa ElAraby
Sabyasachi Sahoo
Yann Pequignot
Paul Novello
Hessian Aware Low-Rank Weight Perturbation for Continual Learning
Jiaqi Li
Rui Wang
Yuanhao Lai
Changjian Shui
Sabyasachi Sahoo
Charles Ling
Shichun Yang
Boyu Wang
Fan Zhou
Continual learning aims to learn a series of tasks sequentially without forgetting the knowledge acquired from the previous ones. In this work, we propose the Hessian Aware Low-Rank Perturbation algorithm for continual learning. By modeling the parameter transitions along the sequential tasks with the weight matrix transformation, we propose to apply the low-rank approximation on the task-adaptive parameters in each layer of the neural networks. Specifically, we theoretically demonstrate the quantitative relationship between the Hessian and the proposed low-rank approximation. The approximation ranks are then globally determined according to the marginal increment of the empirical loss estimated by the layer-specific gradient and low-rank approximation error. Furthermore, we control the model capacity by pruning less important parameters to diminish the parameter growth. We conduct extensive experiments on various benchmarks, including a dataset with large-scale tasks, and compare our method against some recent state-of-the-art methods to demonstrate the effectiveness and scalability of our proposed method. Empirical results show that our method performs better on different benchmarks, especially in achieving task order robustness and handling the forgetting issue. The source code is at https://github.com/lijiaqi/HALRP.
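
The following is a minimal sketch of the general idea of a per-layer low-rank weight perturbation for a new task, assuming PyTorch; the class name `LowRankAdaptedLinear` and the fixed rank are illustrative assumptions, not the authors' HALRP implementation (in the paper, ranks are chosen globally from a Hessian/loss-increment criterion).

```python
# Sketch: shared base weight plus a task-specific low-rank perturbation U @ V.
# Hypothetical names; the rank would be set per layer by the paper's criterion.
import torch
import torch.nn as nn


class LowRankAdaptedLinear(nn.Module):
    """Shared base layer with a task-adaptive low-rank weight perturbation."""

    def __init__(self, in_features: int, out_features: int, rank: int):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)  # shared across tasks
        # Task-adaptive low-rank factors (one pair per task in practice).
        self.U = nn.Parameter(torch.zeros(out_features, rank))
        self.V = nn.Parameter(torch.randn(rank, in_features) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta_w = self.U @ self.V  # low-rank perturbation of the weight matrix
        return self.base(x) + nn.functional.linear(x, delta_w)


# Usage: freeze the shared base and train only the low-rank factors on a new task.
layer = LowRankAdaptedLinear(128, 64, rank=4)
for p in layer.base.parameters():
    p.requires_grad = False
out = layer(torch.randn(8, 128))  # shape (8, 64)
```

The design choice illustrated here is that each new task only adds `rank * (in_features + out_features)` parameters per layer, which is what makes rank selection and pruning the levers for controlling parameter growth.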