Leveraging exploration in off-policy algorithms via normalizing flows

Reinforcement Learning
Oct 2019

The ability to discover approximately optimal policies in domains with sparse rewards is crucial to applying reinforcement learning (RL) in many real-world scenarios. Approaches such as neural density models and continuous exploration (e.g., Go-Explore) have been proposed to maintain the high exploration rate necessary to find high-performing and generalizable policies. Soft actor-critic (SAC) is another method for improving exploration that aims to combine efficient learning via off-policy updates with maximization of the policy entropy. In this work, we extend SAC to a richer class of probability distributions (e.g., multimodal) through normalizing flows (NF) and show that this significantly improves performance by accelerating the discovery of good policies while using much smaller policy representations. Our approach, which we call SAC-NF, is a simple, efficient, easy-to-implement modification and improvement to SAC on continuous control baselines such as the MuJoCo and PyBullet Roboschool domains. Finally, SAC-NF achieves this while being significantly more parameter efficient, using as few as 5.5% of the parameters of an equivalent SAC model.
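The core architectural change can be sketched in a few lines: the standard SAC diagonal-Gaussian policy head is replaced by a state-conditioned base Gaussian whose samples are pushed through a stack of invertible flow layers, with the change-of-variables correction keeping the policy log-density exact. The PyTorch sketch below is illustrative only and is not the authors' released implementation; the choice of planar flows, the layer count, the network sizes, and names such as FlowPolicy and PlanarFlow are assumptions made for this example.

import torch
import torch.nn as nn


class PlanarFlow(nn.Module):
    # One planar flow layer: f(z) = z + u * tanh(w . z + b).
    def __init__(self, dim):
        super().__init__()
        self.u = nn.Parameter(torch.randn(dim) * 0.01)
        self.w = nn.Parameter(torch.randn(dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z):
        lin = z @ self.w + self.b                        # (batch,)
        f = z + self.u * torch.tanh(lin).unsqueeze(-1)   # push samples forward
        # log|det df/dz| = log|1 + u . (1 - tanh^2(w.z + b)) w|
        psi = (1.0 - torch.tanh(lin) ** 2).unsqueeze(-1) * self.w
        log_det = torch.log((1.0 + psi @ self.u).abs() + 1e-8)
        return f, log_det


class FlowPolicy(nn.Module):
    # State-conditioned Gaussian base distribution followed by K flow layers.
    def __init__(self, state_dim, action_dim, hidden=64, n_flows=3):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, action_dim)
        self.log_std = nn.Linear(hidden, action_dim)
        self.flows = nn.ModuleList([PlanarFlow(action_dim) for _ in range(n_flows)])

    def sample(self, state):
        h = self.trunk(state)
        mu, log_std = self.mu(h), self.log_std(h).clamp(-5.0, 2.0)
        base = torch.distributions.Normal(mu, log_std.exp())
        z = base.rsample()                               # reparameterized sample
        log_prob = base.log_prob(z).sum(-1)
        for flow in self.flows:                          # change of variables:
            z, log_det = flow(z)                         # log pi(a|s) = log q0(z0)
            log_prob = log_prob - log_det                #   - sum_k log|det J_k|
        action = torch.tanh(z)                           # squash to bounded actions
        log_prob = log_prob - torch.log(1.0 - action.pow(2) + 1e-6).sum(-1)
        return action, log_prob


# The (action, log_prob) pair then plugs into the standard SAC actor objective,
# E[alpha * log_prob - Q(s, a)], in place of the Gaussian policy head's output.
policy = FlowPolicy(state_dim=8, action_dim=2)
action, log_prob = policy.sample(torch.randn(4, 8))
print(action.shape, log_prob.shape)                      # (4, 2) and (4,)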

Reference

Bogdan Mazoure, Thang Doan, Audrey Durand, Joelle Pineau, R. Devon Hjelm. Leveraging exploration in off-policy algorithms via normalizing flows, 2019.
