Decentralized planning under uncertainty for teams of communicating agents

Type: Scientific congresses, symposiums and conference proceedings (paper published in a book)
Field: Engineering, computing & technology: Computer science
Handle: http://hdl.handle.net/10993/11039
Language: English
Authors: Spaan, Matthijs T. J.; Gordon, Geoffrey J.; Vlassis, Nikos (University of Luxembourg, Luxembourg Centre for Systems Biomedicine (LCSB))
Year: 2006
Published in: Proc. Int. Joint Conf. on Autonomous Agents and Multiagent Systems, Hakodate, Japan
Pages: 249-256
Peer reviewed: Yes
Abstract: Decentralized partially observable Markov decision processes (DEC-POMDPs) form a general framework for planning for groups of cooperating agents that inhabit a stochastic and partially observable environment. Unfortunately, computing optimal plans in a DEC-POMDP has been shown to be intractable (NEXP-complete), and approximate algorithms for specific subclasses have been proposed. Many of these algorithms rely on an (approximate) solution of the centralized planning problem (i.e., treating the whole team as a single agent). We take a more decentralized approach, in which each agent only reasons over its own local state and some uncontrollable state features, which are shared by all team members. In contrast to other approaches, we model communication as an integral part of the agent's reasoning, in which the meaning of a message is directly encoded in the policy of the communicating agent. We explore iterative methods for approximately solving such models, and we conclude with some encouraging preliminary experimental results.
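To make the DEC-POMDP framework mentioned in the abstract concrete, the sketch below sets up a toy two-agent problem and evaluates a fixed pair of local policies by exhaustive expectation over a short horizon. The problem, state names, and the reactive (last-observation) policy form are illustrative assumptions for this sketch, not the model or algorithm of the paper itself; a DEC-POMDP is the tuple of states, per-agent actions, transitions, rewards, and per-agent observations.

```python
"""Minimal DEC-POMDP sketch: a toy two-agent coordination problem.

Everything here is an illustrative assumption, not the authors' model:
two agents each see a noisy private observation of a shared state and
must jointly 'act' in the 'good' state to earn reward.
"""
from itertools import product

S = ['good', 'bad']      # shared (hidden) state
A = ['act', 'wait']      # same local action set for both agents
OBS = ['hi', 'lo']       # same local observation set for both agents

def T(s, a1, a2):
    # Transition: coordinated action in 'good' tends to keep it good.
    if s == 'good' and a1 == 'act' and a2 == 'act':
        return {'good': 0.9, 'bad': 0.1}
    return {'good': 0.2, 'bad': 0.8}

def R(s, a1, a2):
    # Reward only for coordinated action in the good state.
    return 1.0 if (s == 'good' and a1 == 'act' and a2 == 'act') else 0.0

def O(s):
    # Each agent independently draws a noisy signal of the state.
    p_hi = 0.8 if s == 'good' else 0.3
    return {'hi': p_hi, 'lo': 1.0 - p_hi}

def evaluate(policy1, policy2, b0, horizon):
    """Expected total reward of two reactive (observation -> action) policies,
    computed by exhaustive enumeration over observations and transitions."""
    def step(s, t):
        if t == horizon:
            return 0.0
        total = 0.0
        obs_dist = O(s)
        for o1, o2 in product(OBS, OBS):
            p_obs = obs_dist[o1] * obs_dist[o2]
            a1, a2 = policy1[o1], policy2[o2]
            exp_next = sum(p * step(s2, t + 1)
                           for s2, p in T(s, a1, a2).items())
            total += p_obs * (R(s, a1, a2) + exp_next)
        return total
    return sum(p * step(s, 0) for s, p in b0.items())

if __name__ == '__main__':
    b0 = {'good': 0.5, 'bad': 0.5}
    always_act = {'hi': 'act', 'lo': 'act'}
    cautious = {'hi': 'act', 'lo': 'wait'}
    print(round(evaluate(always_act, always_act, b0, 2), 3))
    print(round(evaluate(cautious, cautious, b0, 2), 3))
```

The exhaustive evaluation above scales exponentially in the horizon, which mirrors why, as the abstract notes, optimal DEC-POMDP planning is intractable and approximate or factored approaches (such as the paper's local-state decomposition) are needed.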
Fulltext file: download.pdf (Author postprint, 183.62 kB), open access

All documents in ORBilu are protected by a user license.