Options of Interest: Temporal Abstraction with Interest Functions

Reinforcement Learning
Jan 2020


Temporal abstraction refers to the ability of an agent to use behaviours of controllers which act for a limited, variable amount of time. The options framework describes such behaviours as consisting of a subset of states in which they can initiate, an internal policy, and a stochastic termination condition. However, much of the subsequent work on option discovery has ignored the initiation set, due to the difficulty of learning it from data. We provide a generalization of initiation sets suitable for general function approximation by defining an interest function associated with an option. We derive a gradient-based learning algorithm for interest functions, leading to a new interest-option-critic architecture. We investigate how interest functions can be leveraged to learn interpretable and reusable temporal abstractions. We demonstrate the efficacy of the proposed approach through quantitative and qualitative results in both discrete and continuous environments.
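To make the core idea concrete, here is a minimal illustrative sketch (not the paper's implementation) of how an interest function can generalize a binary initiation set: each option carries a smooth function mapping states to [0, 1], and option selection reweights a base policy over options by each option's interest before renormalizing. The sigmoid parameterization and the state feature used here are assumptions chosen purely for illustration.

```python
import math

def interest(state, option_params):
    """Hypothetical interest function: a sigmoid over a scalar state
    feature, returning a value in (0, 1). A hard initiation set is the
    special case where this saturates to 0 or 1."""
    w, b = option_params
    return 1.0 / (1.0 + math.exp(-(w * state + b)))

def option_selection_probs(state, base_probs, all_option_params):
    """Reweight a base policy over options by each option's interest in
    the current state, then renormalize to a proper distribution."""
    weighted = [interest(state, p) * pi
                for pi, p in zip(base_probs, all_option_params)]
    total = sum(weighted)
    return [w / total for w in weighted]

# Two illustrative options: one interested in low-valued states,
# one interested in high-valued states.
params = [(-4.0, 2.0), (4.0, -2.0)]
probs = option_selection_probs(0.0, [0.5, 0.5], params)
```

At state 0.0 the first option's interest dominates, so the selection distribution shifts toward it even though the base policy is uniform; in the full architecture, the interest parameters themselves would be updated by the gradient-based learning rule the paper derives.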

Reference

PDF

Source code
