Learning Causal State Representations of Partially Observable Environments

Reinforcement Learning
Jun 2019

Intelligent agents can cope with sensory-rich environments by learning task-agnostic state abstractions. In this paper, we propose mechanisms to approximate causal states, which optimally compress the joint history of actions and observations in partially-observable Markov decision processes. Our proposed algorithm extracts causal state representations from RNNs that are trained to predict subsequent observations given the history. We demonstrate that these learned task-agnostic state abstractions can be used to efficiently learn policies for reinforcement learning problems with rich observation spaces. We evaluate agents using multiple partially observable navigation tasks with both discrete (GridWorld) and continuous (VizDoom, ALE) observation processes that cannot be solved by traditional memory-limited methods. Our experiments demonstrate systematic improvement of the DQN and tabular models using approximate causal state representations with respect to recurrent-DQN baselines trained with raw inputs.
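
To make the abstract's core idea concrete, here is a minimal sketch of the kind of predictive model it describes: an RNN trained to predict the next observation from the action-observation history, whose hidden state is then reused as an approximate causal state. This is an illustrative assumption, not the paper's exact architecture; the class name PredictiveStateRNN, the GRU backbone, the layer sizes, and the MSE prediction loss are all hypothetical choices made for this example.

```python
import torch
import torch.nn as nn


class PredictiveStateRNN(nn.Module):
    """Sketch of an RNN that compresses the joint action-observation history.

    The hidden state produced at each step is treated as an approximate
    causal state representation for downstream RL (hypothetical setup).
    """

    def __init__(self, obs_dim, act_dim, hidden_dim=128):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + act_dim, hidden_dim, batch_first=True)
        self.predict_obs = nn.Linear(hidden_dim, obs_dim)

    def forward(self, observations, actions, hidden=None):
        # observations: (batch, T, obs_dim), actions: (batch, T, act_dim)
        x = torch.cat([observations, actions], dim=-1)
        states, hidden = self.rnn(x, hidden)       # states: (batch, T, hidden_dim)
        next_obs_pred = self.predict_obs(states)   # predict o_{t+1} from the history
        return next_obs_pred, states, hidden


def prediction_loss(model, observations, actions):
    """Next-observation prediction loss over a batch of trajectories."""
    pred, _, _ = model(observations[:, :-1], actions[:, :-1])
    target = observations[:, 1:]
    return nn.functional.mse_loss(pred, target)


# Example usage on a dummy batch of trajectories (shapes are illustrative).
model = PredictiveStateRNN(obs_dim=16, act_dim=4)
obs = torch.randn(8, 20, 16)    # (batch, time, obs_dim)
acts = torch.randn(8, 20, 4)    # one-hot or embedded actions
loss = prediction_loss(model, obs, acts)
loss.backward()
```

In the spirit of the abstract, the learned hidden states (or a discretized clustering of them) would then replace raw observations as input to a DQN or tabular learner; the specific downstream training procedure is described in the paper linked below.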

Reference

https://arxiv.org/abs/1906.10437
