
Sanjay Thakur

Alumni

Publications

Unifying Variational Inference and PAC-Bayes for Supervised Learning that Scales
Neural network based controllers hold enormous potential to learn complex, high-dimensional functions. However, they are prone to overfitting and unwarranted extrapolations. PAC-Bayes is a generalized framework that is more resistant to overfitting and yields performance bounds that hold with arbitrarily high probability, even under unjustified extrapolations. However, optimizing to learn such a function and a bound is intractable for complex tasks. In this work, we propose a method to simultaneously learn such a function and estimate performance bounds that scales organically to high-dimensional, non-linear environments without making any explicit assumptions about the environment. We build our approach on a parallel that we draw between the ELBO and PAC-Bayes formulations when the risk metric is the negative log-likelihood. Through our experiments on multiple high-dimensional MuJoCo locomotion tasks, we validate the correctness of our theory, show its ability to generalize better, and investigate the factors that are important for its learning. The code for all the experiments is available at this https URL.
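The ELBO/PAC-Bayes parallel the abstract refers to can be sketched in a standard textbook form (the paper's exact bound, constants, and notation may differ; the simplified linear-in-KL bound below is one common Catoni-style variant, not necessarily the one used in the work). With prior $p(\theta)$, variational posterior $q(\theta)$, and the negative log-likelihood as the risk:

```latex
% Evidence lower bound (ELBO) for a dataset D = {(x_i, y_i)}_{i=1}^n:
\log p(\mathcal{D}) \;\ge\;
  \mathbb{E}_{q(\theta)}\!\left[\log p(\mathcal{D}\mid\theta)\right]
  - \mathrm{KL}\!\left(q(\theta)\,\|\,p(\theta)\right)

% A simplified (Catoni-style) PAC-Bayes bound with empirical risk
%   \hat{R}(\theta) = -\tfrac{1}{n}\sum_{i=1}^{n} \log p(y_i \mid x_i, \theta):
% with probability at least 1 - \delta over the sample, for a fixed \lambda > 0,
\mathbb{E}_{q}\!\left[R(\theta)\right] \;\le\;
  \mathbb{E}_{q}\!\left[\hat{R}(\theta)\right]
  + \frac{\mathrm{KL}(q\,\|\,p) + \ln\tfrac{1}{\delta}}{\lambda n}

% Since \mathbb{E}_q[\hat{R}(\theta)] = -\tfrac{1}{n}\,\mathbb{E}_q[\log p(\mathcal{D}\mid\theta)],
% minimizing the right-hand side over q (at \lambda = 1) is the same optimization
% as maximizing the ELBO scaled by 1/n: the same posterior q minimizes the bound
% and maximizes the ELBO, which is what lets one objective yield both.
```

This is why fitting the variational posterior can simultaneously produce a generalization bound: the KL-regularized empirical NLL being minimized is, term for term, the optimizable part of the PAC-Bayes bound.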
Uncertainty Aware Learning from Demonstrations in Multiple Contexts using Bayesian Neural Networks
Herke van Hoof
Juan Camilo Gamboa Higuera
Diversity of environments is a key challenge that causes learned robotic controllers to fail due to discrepancies between the training and evaluation conditions. Training from demonstrations in various conditions can mitigate---but not completely prevent---such failures. Learned controllers such as neural networks typically lack a notion of uncertainty that would allow them to diagnose an offset between training and testing conditions, and potentially intervene. In this work, we propose to use Bayesian Neural Networks, which have such a notion of uncertainty. We show that uncertainty can be leveraged to consistently detect situations in high-dimensional simulated and real robotic domains in which the performance of the learned controller would be sub-par. We also show that such an uncertainty-based solution allows making an informed decision about when to invoke a fallback strategy. One fallback strategy is to request more data. We empirically show that providing data only when requested results in increased data-efficiency.
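The uncertainty-triggered fallback described above can be illustrated with a minimal sketch. Instead of a full Bayesian Neural Network, this toy example uses disagreement across an ensemble of random-feature regressors as a cheap stand-in for posterior predictive variance; the function names (`predict`, `needs_fallback`), the threshold, and the 1-D task are all illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# In-distribution training data: a 1-D "demonstration" signal on [-1, 1].
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.standard_normal(200)

def fit_member(seed, n_features=50):
    """Fit one ensemble member: linear regression on random Fourier features."""
    r = np.random.default_rng(seed)
    W = r.standard_normal((1, n_features))
    b = r.uniform(0.0, 2.0 * np.pi, n_features)
    Phi = np.cos(X @ W + b)                      # random feature map
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # least-squares fit
    return W, b, w

ensemble = [fit_member(seed) for seed in range(10)]

def predict(x):
    """Return (mean prediction, ensemble std) at input x of shape (1, 1).

    The std across members acts as an epistemic-uncertainty proxy: members
    agree where training data exists and disagree far from it.
    """
    preds = np.array([np.cos(x @ W + b) @ w for W, b, w in ensemble])
    return preds.mean(), preds.std()

def needs_fallback(x, threshold=0.2):
    """Decide whether to distrust the controller and invoke a fallback
    (e.g. request more demonstrations), based on the uncertainty proxy."""
    _, sigma = predict(x)
    return sigma > threshold

x_in = np.array([[0.3]])   # inside the training range: members agree
x_out = np.array([[4.0]])  # far outside: members disagree, fallback fires
```

The key property, mirroring the abstract's claim, is that the uncertainty signal is systematically larger out of distribution than in distribution, so a simple threshold suffices to detect conditions where the learned controller should not be trusted.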