No full text
Unpublished conference/Abstract (Scientific congresses, symposiums and conference proceedings)
Stochastic POMDP controllers: How easy to optimize?
Vlassis, Nikos; Littman, M. L.; Barber, D.
2012, 10th European Workshop on Reinforcement Learning
 

Keywords :
Markov decision process; POMDP; stochastic controller; computational complexity; NP-hardness
Abstract :
[en] It was recently shown that computing an optimal stochastic controller in a discounted infinite-horizon partially observable Markov decision process is an NP-hard problem. The reduction (from the independent-set problem) involves designing an MDP with special state-action rewards. In this note, we show that the case of state-only-dependent rewards is also NP-hard.
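For orientation only, the optimization problem the abstract refers to can be sketched in the following generic form; the symbols pi (a memoryless stochastic controller mapping observations to action probabilities), r (reward), gamma (discount factor), and the trajectory expectation are standard POMDP notation assumed for this sketch, not notation taken from the paper itself:
\[
\max_{\pi}\; V(\pi) \;=\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right]
\quad \text{s.t.} \quad \pi(a \mid o) \ge 0,\qquad \sum_{a} \pi(a \mid o) = 1 \ \ \text{for every observation } o.
\]
In the state-only-dependent reward case considered in the note, the per-step reward \(r(s_t, a_t)\) is replaced by \(r(s_t)\).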
Research center :
Luxembourg Centre for Systems Biomedicine (LCSB): Machine Learning (Vlassis Group)
Disciplines :
Computer science
Identifiers :
UNILU:UL-CONFERENCE-2012-294
Author, co-author :
Vlassis, Nikos ;  University of Luxembourg > Luxembourg Centre for Systems Biomedicine (LCSB)
Littman, M. L.
Barber, D.
External co-authors :
yes
Language :
English
Title :
Stochastic POMDP controllers: How easy to optimize?
Publication date :
2012
Event name :
10th European Workshop on Reinforcement Learning
Event place :
Edinburgh, United Kingdom
Event date :
2012
Audience :
International
References of the abstract :
http://ewrl.wordpress.com/ewrl10-2012/
Available on ORBilu :
since 24 June 2013
