
Variational State Encoding as Intrinsic Motivation in Reinforcement Learning

Reinforcement Learning
May 2019

Discovering efficient exploration strategies is a central challenge in reinforcement learning (RL), especially in sparse-reward environments. We postulate that to discover such strategies, an RL agent should be able to identify surprising, and potentially useful, states where it encounters meaningful information that deviates from its prior beliefs about the environment. Intuitively, this approach can be understood as leveraging a measure of the agent's surprise to guide exploration. To this end, we provide a straightforward mechanism: we train a variational auto-encoder to extract the latent structure of the task. Importantly, variational auto-encoders maintain a posterior distribution over this latent structure. By measuring the divergence between this distribution and the agent's prior beliefs, we can identify states that may hold meaningful information. Using this divergence as a measure of intrinsic motivation, we empirically demonstrate that an agent can solve a series of challenging sparse-reward, highly stochastic, and partially observable maze tasks. We also perform experiments on continuous control tasks with dense rewards and show improved performance in most cases.
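
The sketch below illustrates the general idea described in the abstract, not the authors' implementation: a VAE-style encoder produces a Gaussian posterior over the latent structure of a state, and the KL divergence between that posterior and a prior is used as a per-state "surprise" bonus added to the environment reward. The standard normal prior, network sizes, the coefficient `beta`, and names such as `StateEncoder` and `intrinsic_reward` are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): KL(q(z|s) || p(z)) as an
# intrinsic exploration bonus. Sizes, prior, and scaling are assumptions.
import torch
import torch.nn as nn

class StateEncoder(nn.Module):
    def __init__(self, obs_dim, latent_dim=16, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)      # posterior mean
        self.logvar = nn.Linear(hidden, latent_dim)  # posterior log-variance

    def forward(self, obs):
        h = self.body(obs)
        return self.mu(h), self.logvar(h)

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.
    return 0.5 * torch.sum(logvar.exp() + mu.pow(2) - 1.0 - logvar, dim=-1)

def intrinsic_reward(encoder, obs, beta=0.1):
    # States whose posterior deviates strongly from the prior are treated
    # as surprising and receive a larger exploration bonus.
    with torch.no_grad():
        mu, logvar = encoder(obs)
        return beta * kl_to_standard_normal(mu, logvar)
```

In use, the bonus would be added to the extrinsic reward before the RL update, while the encoder itself is trained with the usual VAE objective (reconstruction plus KL regularization).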

Reference

https://tarl2019.github.io/assets/papers/klissarov2019variational.pdf
