Paper published in a book (Scientific congresses, symposiums and conference proceedings)
Sparse Cooperative Q-learning
Kok, Jelle R.; Vlassis, Nikos
2004, in Proc. 21st Int. Conf. on Machine Learning, Banff, Canada
Peer reviewed
 

Details



Abstract :
[en] Learning in multiagent systems suffers from the fact that both the state and the action space scale exponentially with the number of agents. In this paper we are interested in using Q-learning to learn the coordinated actions of a group of cooperative agents, using a sparse representation of the joint state-action space of the agents. We first examine a compact representation in which the agents need to explicitly coordinate their actions only in a predefined set of states. Next, we use a coordination-graph approach in which we represent the Q-values by value rules that specify the coordination dependencies of the agents at particular states. We show how Q-learning can be efficiently applied to learn a coordinated policy for the agents in the above framework. We demonstrate the proposed method on the predator-prey domain, and we compare it with other related multiagent Q-learning methods.
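The core idea in the abstract can be illustrated with a minimal sketch (not the paper's exact algorithm): each agent keeps a local Q-table, the global Q-value of a joint action is the sum of the local values, and the joint temporal-difference error is distributed across the agents. The class name, the equal-share error split, and the independent per-agent maximization are illustrative assumptions; the paper itself uses value rules and variable elimination on a coordination graph.

```python
from collections import defaultdict

class SparseCooperativeQ:
    """Illustrative sketch of additively decomposed multiagent Q-learning:
    global Q(s, a) = sum of per-agent local Q_i(s, a_i), with the joint
    TD error shared equally among the agents."""

    def __init__(self, n_agents, actions, alpha=0.3, gamma=0.9):
        self.n = n_agents
        self.actions = actions
        self.alpha = alpha
        self.gamma = gamma
        # One table per agent: Q_i[(state, own_action)] -> value.
        self.q = [defaultdict(float) for _ in range(n_agents)]

    def joint_q(self, state, joint_action):
        # Global value is the sum of the agents' local values.
        return sum(self.q[i][(state, joint_action[i])] for i in range(self.n))

    def best_joint_action(self, state):
        # With fully decomposed tables each agent maximizes independently;
        # a real coordination graph would use variable elimination here.
        return tuple(max(self.actions, key=lambda a: self.q[i][(state, a)])
                     for i in range(self.n))

    def update(self, state, joint_action, reward, next_state):
        next_a = self.best_joint_action(next_state)
        td = (reward + self.gamma * self.joint_q(next_state, next_a)
              - self.joint_q(state, joint_action))
        # Each agent absorbs an equal share of the joint TD error.
        for i in range(self.n):
            self.q[i][(state, joint_action[i])] += self.alpha * td / self.n
```

For example, two agents repeatedly rewarded only for the joint action (0, 0) will raise their local values for action 0 until the greedy joint action is the coordinated one. The sparsity of the actual method comes from storing local values only for the states where coordination matters, which this sketch omits.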
Disciplines :
Computer science
Identifiers :
UNILU:UL-ARTICLE-2011-731
Author, co-author :
Kok, Jelle R.
Vlassis, Nikos ;  University of Luxembourg > Luxembourg Centre for Systems Biomedicine (LCSB)
Language :
English
Title :
Sparse Cooperative Q-learning
Publication date :
2004
Event name :
Proc. 21st Int. Conf. on Machine Learning, Banff, Canada
Event date :
2004
Main work title :
Proc. 21st Int. Conf. on Machine Learning, Banff, Canada
Pages :
481-488
Peer reviewed :
Peer reviewed
Available on ORBilu :
since 17 November 2013

Statistics

Scopus citations® :
77
Scopus citations® without self-citations :
72
