Point-Based Value Iteration for Continuous POMDPs
Porta, Josep M.
Vlassis, Nikos (University of Luxembourg, Luxembourg Centre for Systems Biomedicine (LCSB))
Spaan, Matthijs T. J.
Poupart, Pascal
Journal of Machine Learning Research
Peer reviewed: Yes (verified by ORBilu)
[en] We propose a novel approach to optimize Partially Observable Markov Decision Processes (POMDPs) defined on continuous spaces. To date, most algorithms for model-based POMDPs are restricted to discrete states, actions, and observations, but many real-world problems, such as robot navigation, are naturally defined on continuous spaces. In this work, we demonstrate that the value function for continuous POMDPs is convex in the beliefs over continuous state spaces, and piecewise-linear convex for the particular case of discrete observations and actions but still continuous states. We also demonstrate that continuous Bellman backups are contracting and isotonic, ensuring the monotonic convergence of value-iteration algorithms. Relying on those properties, we extend the algorithm, originally developed for discrete POMDPs, to work in continuous state spaces by representing the observation, transition, and reward models using Gaussian mixtures, and the beliefs using Gaussian mixtures or particle sets. With these representations, the integrals that appear in the Bellman backup can be computed in closed form and, therefore, the algorithm is computationally feasible. Finally, we further extend the approach to deal with continuous action and observation sets by designing effective sampling approaches.
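The closed-form tractability claimed in the abstract rests on a basic fact: the pointwise product of two Gaussian mixtures is again a Gaussian mixture, so a belief update (prior belief times observation likelihood) never leaves the representation class. A minimal 1-D sketch of that product, under assumed `(weight, mean, variance)` tuples and not taken from the paper's own code, might look like:

```python
import math

def normal_pdf(x, mean, var):
    """Density of a 1-D Gaussian N(mean, var) evaluated at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def gm_product(gm_a, gm_b):
    """Closed-form product of two 1-D Gaussian mixtures, each given as a
    list of (weight, mean, var) tuples.  The product of two Gaussian
    densities is an (unnormalized) Gaussian, so the product mixture has
    one component per pair of input components; weights are renormalized
    so the result is again a valid belief."""
    out = []
    for (wa, ma, va) in gm_a:
        for (wb, mb, vb) in gm_b:
            var = 1.0 / (1.0 / va + 1.0 / vb)        # precisions add
            mean = var * (ma / va + mb / vb)         # precision-weighted mean
            # Scale factor: how compatible the two component means are.
            w = wa * wb * normal_pdf(ma, mb, va + vb)
            out.append((w, mean, var))
    total = sum(w for (w, _, _) in out)
    return [(w / total, m, v) for (w, m, v) in out]
```

For example, multiplying a unit-variance prior at 0 by a unit-variance likelihood at 2 yields a single component with mean 1 and variance 0.5, computed without any numerical integration; this is the mechanism that keeps the Bellman-backup integrals closed form.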

File associated with this reference: open-access postprint (PDF, 515.49 kB)


All documents in ORBilu are protected by a user license.