Reference : Sparse Cooperative Q-learning
Scientific congresses, symposiums and conference proceedings : Paper published in a book
Engineering, computing & technology : Computer science
http://hdl.handle.net/10993/11050
Sparse Cooperative Q-learning
English
Kok, Jelle R.
Vlassis, Nikos (University of Luxembourg > Luxembourg Centre for Systems Biomedicine (LCSB))
2004
Proc. 21st Int. Conf. on Machine Learning, Banff, Canada
481-488
Peer reviewed: Yes
[en] Learning in multiagent systems suffers from the fact that both the state and the action space scale exponentially with the number of agents. In this paper we are interested in using Q-learning to learn the coordinated actions of a group of cooperative agents, using a sparse representation of the joint state-action space of the agents. We first examine a compact representation in which the agents need to explicitly coordinate their actions only in a predefined set of states. Next, we use a coordination-graph approach in which we represent the Q-values by value rules that specify the coordination dependencies of the agents at particular states. We show how Q-learning can be efficiently applied to learn a coordinated policy for the agents in the above framework. We demonstrate the proposed method on the predator-prey domain, and we compare it with other related multiagent Q-learning methods.
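The following is a minimal, hypothetical Python sketch of the first idea in the abstract: agents keep independent Q-values in most states, and only in a predefined set of "coordination states" do they learn over joint actions. It is not the authors' code or the paper's exact update rules; the class and parameter names (CoordSparseQ, coord_states, etc.) and the way the next-state value is formed are simplifying assumptions for illustration.

```python
# Hedged sketch of sparse cooperative Q-learning with a predefined set of
# coordination states. Independent per-agent Q-tables are used elsewhere;
# a joint Q-table over joint actions is used only in coordination states.
import random
from collections import defaultdict
from itertools import product


class CoordSparseQ:
    def __init__(self, n_agents, actions, coord_states,
                 alpha=0.1, gamma=0.9, eps=0.1):
        self.n_agents = n_agents
        self.actions = actions                    # per-agent action set
        self.coord_states = set(coord_states)     # states needing joint coordination
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        # (state, action) -> value, one table per agent
        self.q_ind = [defaultdict(float) for _ in range(n_agents)]
        # (state, joint_action) -> value, only consulted in coordination states
        self.q_joint = defaultdict(float)

    def _best_joint(self, state):
        joint_actions = product(self.actions, repeat=self.n_agents)
        return max(joint_actions, key=lambda ja: self.q_joint[(state, ja)])

    def act(self, state):
        # epsilon-greedy over joint actions
        if random.random() < self.eps:
            return tuple(random.choice(self.actions) for _ in range(self.n_agents))
        if state in self.coord_states:
            return self._best_joint(state)
        # uncoordinated states: each agent maximizes its own table
        return tuple(max(self.actions, key=lambda a: self.q_ind[i][(state, a)])
                     for i in range(self.n_agents))

    def _value(self, state):
        # Simplified bootstrap value of the next state (an assumption, not the
        # paper's decomposition): joint max in coordination states, averaged
        # individual maxima elsewhere.
        if state in self.coord_states:
            return self.q_joint[(state, self._best_joint(state))]
        return sum(max(self.q_ind[i][(state, a)] for a in self.actions)
                   for i in range(self.n_agents)) / self.n_agents

    def update(self, state, joint_action, reward, next_state):
        target = reward + self.gamma * self._value(next_state)
        if state in self.coord_states:
            q = self.q_joint[(state, joint_action)]
            self.q_joint[(state, joint_action)] = q + self.alpha * (target - q)
        else:
            for i, a in enumerate(joint_action):
                q = self.q_ind[i][(state, a)]
                self.q_ind[i][(state, a)] = q + self.alpha * (target - q)
```

The sparse joint table is what keeps the representation compact: only the (state, joint action) pairs actually visited in coordination states are stored, rather than the full exponential joint space.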

File(s) associated to this reference

Fulltext file(s):

download.pdf — Author postprint — 164.18 kB — Open access

