References of "Mostaani, Arsham 50034660"
Full Text
Peer Reviewed
State Aggregation for Multiagent Communication over Rate-Limited Channels
Mostaani, Arsham; Vu, Thang Xuan; Chatzinotas, Symeon et al

in State Aggregation for Multiagent Communication over Rate-Limited Channels (2020, December)


A collaborative task is assigned to a multiagent system (MAS) in which agents are allowed to communicate. The MAS runs over an underlying Markov decision process, and its task is to maximize the averaged sum of discounted one-stage rewards. Although knowledge of the global state of the environment is necessary for optimal action selection, each agent is limited to its individual observations. Inter-agent communication can tackle the issue of local observability; however, the limited rate of inter-agent communication prevents the agents from acquiring precise global state information. To overcome this challenge, agents need to communicate their observations compactly, such that the resulting loss in the sum of rewards is minimized. We show that this problem is equivalent to a form of rate-distortion problem, which we call task-based information compression. State Aggregation for Information Compression (SAIC) is introduced here to perform this task-based information compression. SAIC is shown, under certain conditions, to achieve optimal performance in terms of the attained sum of discounted rewards. The proposed algorithm is applied to a rendezvous problem, and its performance is compared with two benchmarks: (i) conventional source coding algorithms and (ii) centralized multiagent control using reinforcement learning. Numerical experiments confirm the superiority and fast convergence of the proposed SAIC.
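The state-aggregation idea in the abstract can be illustrated with a minimal sketch: states whose (learned) values are close are mapped to the same message, so an observation can be sent over a channel that supports only a small number of distinct messages. The function name, the value table, and the uniform value-quantization rule below are illustrative assumptions, not the paper's exact SAIC algorithm.

```python
def aggregate_states(state_values, num_messages):
    """Map each state to one of `num_messages` labels by uniformly
    quantizing its (learned) value; states with similar values share
    a label, so the message fits a rate-limited channel."""
    lo = min(state_values.values())
    hi = max(state_values.values())
    width = (hi - lo) / num_messages or 1.0  # guard against hi == lo
    mapping = {}
    for state, value in state_values.items():
        label = min(int((value - lo) / width), num_messages - 1)
        mapping[state] = label
    return mapping

# Hypothetical value table: s0/s1 are low-value states, s2/s3 high-value.
values = {"s0": 0.10, "s1": 0.15, "s2": 0.90, "s3": 0.85}
msg = aggregate_states(values, num_messages=2)
```

With a 2-message budget, `s0` and `s1` share one message and `s2` and `s3` share the other, which is the compression-for-the-task behavior the abstract describes in simplified form.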

Full Text
Peer Reviewed
Learning-based Physical Layer Communications for Multiagent Collaboration
Mostaani, Arsham; Simeone, Osvaldo; Chatzinotas, Symeon et al

in PIMRC 2019 Proceedings (2019, September 11)


Consider a collaborative task carried out by two autonomous agents that can communicate over a noisy channel. Each agent is only aware of its own state, while the accomplishment of the task depends on the value of the joint state of both agents. As an example, both agents must simultaneously reach a certain location of the environment, while only being aware of their own positions. Assuming the presence of feedback in the form of a common reward to the agents, a conventional approach would apply separately: (i) an off-the-shelf coding and decoding scheme to enhance the reliability of communicating the state of one agent to the other; and (ii) a standard multiagent reinforcement learning strategy to learn how to act in the resulting environment. In this work, it is argued that the performance of the collaborative task can be improved if the agents learn how to jointly communicate and act. In particular, numerical results for a baseline grid world example demonstrate that the jointly learned policy carries out compression and unequal error protection by leveraging information about the action policy.
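The joint-state dependence described in the abstract can be sketched in a few lines: each agent observes only its own position, yet the common reward is nonzero only when both agents occupy the goal cell at the same time. The goal coordinates, function names, and reward values are assumptions for illustration, not the paper's exact grid-world specification.

```python
GOAL = (2, 2)  # hypothetical rendezvous cell in a small grid world

def joint_reward(pos_a, pos_b, goal=GOAL):
    """Common reward shared by both agents: 1 only if BOTH agents are
    at the goal simultaneously, 0 otherwise. Neither agent can compute
    this from its own position alone, which is why communication over
    the (noisy) channel matters."""
    return 1.0 if pos_a == goal and pos_b == goal else 0.0

def local_observation(own_pos):
    """Each agent's observation: its own position only."""
    return own_pos

r_meet = joint_reward((2, 2), (2, 2))
r_miss = joint_reward((2, 2), (0, 1))
```

The point of the sketch is that `joint_reward` is a function of the joint state while `local_observation` exposes only half of it; closing that gap is what the learned communication policy is for.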
