
Reinforcement learning & Planning

Hierarchical POMDP Controller Optimization by Likelihood Maximization

Jul 2008

Planning can often be simplified by decomposing the task into smaller tasks arranged hierarchically. Charlin et al. [4] recently showed that the hierarchy discovery problem can be framed as a non-convex optimization problem. However, the inherent difficulty of solving such a non-convex optimization problem makes the approach hard to scale to real-world problems. In another line of research, Toussaint et al. [18] developed a method to solve planning problems by maximum likelihood estimation. In this paper, we show how the hierarchy discovery problem in partially observable domains can be tackled using a similar maximum likelihood approach. Our technique first transforms the problem into a dynamic Bayesian network, through which a hierarchical structure can naturally be discovered while optimizing the policy. Experimental results demonstrate that this approach scales better than previous techniques based on non-convex optimization.
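The following is a minimal sketch (not the authors' code) of the planning-as-inference idea the abstract describes for a flat, non-hierarchical finite-state controller: the POMDP and controller are viewed as a dynamic Bayesian network in which reward, rescaled to [0, 1], acts like the probability of a binary "success" observation, and EM-style updates reweight the controller parameters by their expected contribution to that likelihood. The toy two-state POMDP, the controller size, and all variable names are assumptions for illustration; the paper's contribution is discovering a hierarchical controller structure on top of updates of this kind.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, Z, N = 2, 2, 2, 3          # states, actions, observations, FSC nodes
gamma = 0.9

# Toy POMDP (an assumption for illustration): T[s, a, s'], O[s', a, z],
# and R[s, a] already rescaled to [0, 1] so it can act as P(reward = 1 | s, a).
T = rng.dirichlet(np.ones(S), size=(S, A))
O = rng.dirichlet(np.ones(Z), size=(S, A))
R = rng.uniform(size=(S, A))
b0 = np.full(S, 1.0 / S)          # initial state belief

# Stochastic finite-state controller: psi[n, a] = P(a | n), eta[n, z, n'] = P(n' | n, z)
psi = rng.dirichlet(np.ones(A), size=N)
eta = rng.dirichlet(np.ones(N), size=(N, Z))

def value_and_occupancy(psi, eta):
    """Exact E-step quantities on the joint (s, n) Markov chain."""
    # P[(s, n) -> (s', n')] induced by the controller
    P = np.einsum('na,sax,xaz,nzm->snxm', psi, T, O, eta).reshape(S * N, S * N)
    r = np.einsum('na,sa->sn', psi, R).reshape(S * N)   # expected reward per (s, n)
    mu0 = np.outer(b0, np.eye(N)[0]).reshape(S * N)     # controller starts in node 0
    I = np.eye(S * N)
    beta = np.linalg.solve(I - gamma * P, r)            # value of each (s, n)
    alpha = np.linalg.solve(I - gamma * P.T, mu0)       # discounted occupancy
    return alpha.reshape(S, N), beta.reshape(S, N), mu0 @ beta

for it in range(50):
    alpha, beta, V = value_and_occupancy(psi, eta)
    # Credit for choosing a in (s, n): immediate reward plus discounted continuation
    cont = np.einsum('sax,xaz,nzm,xm->sna', T, O, eta, beta)
    q = R[:, None, :] + gamma * cont                    # q[s, n, a]
    # M-step: reweight parameters by expected reward credit, then renormalize
    psi_new = psi * np.einsum('sn,sna->na', alpha, q)
    psi = psi_new / psi_new.sum(axis=1, keepdims=True)
    w = np.einsum('sn,na,sax,xaz,xm->nzm', alpha, psi, T, O, beta)
    eta_new = eta * w
    eta = eta_new / eta_new.sum(axis=2, keepdims=True)
    if it % 10 == 0:
        print(f"iter {it:2d}  expected discounted reward V = {V:.4f}")
```

Because the rescaled reward is non-negative, the reweighted parameters remain valid distributions after normalization, and the expected discounted reward V increases across iterations on this toy problem.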

Reference

Marc Toussaint, Laurent Charlin, Pascal Poupart, "Hierarchical POMDP Controller Optimization by Likelihood Maximization", in: Proceedings of the Twenty-Fourth Conference on Uncertainty in Artificial Intelligence (UAI), 2008.
