This paper contributes a new approach for distributional reinforcement learning which elucidates a clean separation of transition structure and reward in the learning process. Analogous to how the successor representation (SR) describes the expected consequences of behaving according to a given policy, our distributional successor measure (SM) describes the distributional consequences of this behaviour. We formulate the distributional SM as a distribution over distributions and provide theory connecting it with distributional and model-based reinforcement learning. Moreover, we propose an algorithm that learns the distributional SM from data by minimizing a two-level maximum mean discrepancy. Key to our method are a number of algorithmic techniques that are independently valuable for learning generative models of state. As an illustration of the usefulness of the distributional SM, we show that it enables zero-shot risk-sensitive policy evaluation in a way that was not previously possible.
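To make the "two-level maximum mean discrepancy" concrete, here is a minimal sketch (not the authors' implementation; all function names, kernel choices, and estimators are illustrative assumptions) of one way such a quantity can be computed: an inner MMD compares two sample sets, and an outer Gaussian kernel built on that inner MMD compares two collections of sample sets, i.e. two empirical distributions over distributions.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Gaussian RBF kernel between two vectors.
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def mmd2(xs, ys, kernel):
    # Squared MMD between two sample sets (biased V-statistic:
    # diagonal terms are included for simplicity).
    kxx = np.mean([kernel(a, b) for a in xs for b in xs])
    kyy = np.mean([kernel(a, b) for a in ys for b in ys])
    kxy = np.mean([kernel(a, b) for a in xs for b in ys])
    return kxx + kyy - 2.0 * kxy

def two_level_mmd2(Ps, Qs, inner_kernel, outer_sigma=1.0):
    # Squared MMD between two samples of *distributions*, where each
    # distribution is itself represented by a sample set. The outer
    # kernel is a Gaussian applied to the inner squared MMD.
    def outer_kernel(p, q):
        return np.exp(-mmd2(p, q, inner_kernel) / (2.0 * outer_sigma ** 2))
    return mmd2(Ps, Qs, outer_kernel)

# Hypothetical usage: two collections of empirical distributions over R^2.
rng = np.random.default_rng(0)
Ps = [rng.normal(0.0, 1.0, size=(32, 2)) for _ in range(8)]
Qs = [rng.normal(0.5, 1.0, size=(32, 2)) for _ in range(8)]
print(two_level_mmd2(Ps, Qs, gaussian_kernel))
```

The design point this sketch captures is that the inner MMD induces a distance between distributions, which the outer kernel then lifts to a kernel on distributions, allowing a standard MMD estimator to be reused at both levels.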
Deep reinforcement learning agents for continuous control are known to exhibit significant instability in their performance over time. In this work, we provide a fresh perspective on these behaviors by studying the return landscape: the mapping between a policy and a return. We find that popular algorithms traverse noisy neighborhoods of this landscape, in which a single update to the policy parameters leads to a wide range of returns. By taking a distributional view of these returns, we map the landscape, characterizing failure-prone regions of policy space and revealing a hidden dimension of policy quality. We show that the landscape exhibits surprising structure by finding simple paths in parameter space which improve the stability of a policy. To conclude, we develop a distribution-aware procedure which finds such paths, navigating away from noisy neighborhoods in order to improve the robustness of a policy. Taken together, our results provide new insight into the optimization, evaluation, and design of agents.
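As a rough illustration of probing a return landscape (a hedged sketch under assumed interfaces, not the paper's procedure: `evaluate` stands in for whatever rollout routine scores a parameter vector, and the radius and sample counts are arbitrary), one can perturb a policy's parameters slightly and record the distribution of returns at each nearby point; high dispersion across these points is what a "noisy neighborhood" would look like empirically.

```python
import numpy as np

def return_distribution(evaluate, theta, n_rollouts=20):
    # Empirical distribution of returns at a fixed parameter vector.
    # `evaluate(theta)` is assumed to run one episode and return its
    # (stochastic) return; this helper is hypothetical.
    return np.array([evaluate(theta) for _ in range(n_rollouts)])

def probe_neighborhood(evaluate, theta, radius=0.01,
                       n_points=10, n_rollouts=20, seed=0):
    # Probe the return landscape around theta: sample random
    # perturbations of fixed norm and collect the return
    # distribution at each perturbed point.
    rng = np.random.default_rng(seed)
    dists = []
    for _ in range(n_points):
        delta = rng.normal(size=theta.shape)
        delta *= radius / np.linalg.norm(delta)
        dists.append(return_distribution(evaluate, theta + delta, n_rollouts))
    return np.stack(dists)  # shape: (n_points, n_rollouts)

# A neighborhood where per-point return distributions have large
# spread (e.g. high std or heavy lower tail) would be flagged as noisy.
```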