Ortiz Gomez, Flor de Guadalupe
in Towards the Application of Neuromorphic Computing to Satellite Communications (2022, October)

Abstract: Artificial intelligence (AI) has recently received significant attention as a key enabler for future 5G-and-beyond terrestrial wireless networks. The application of AI to satellite communications is also gaining momentum to realize a more autonomous operation with reduced requirements for human intervention. The adoption of AI for satellite communications will set new requirements on computing processors, which will need to support large workloads as efficiently as possible under harsh environmental conditions. In this context, neuromorphic processing (NP) is emerging as a bio-inspired solution to address pattern recognition tasks involving multiple, possibly unstructured, temporal signals and/or requiring continual learning. The key merits of the technology are energy efficiency and capacity for on-device adaptation. In this paper, we highlight potential use cases and applications of NP to satellite communications. We also explore major technical challenges for the implementation of space-based NP, focusing on the available NP chipsets.

Mostaani, Arsham
in Mostaani, Arsham; Simeone, Osvaldo; Chatzinotas, Symeon (Eds.) et al., PIMRC 2019 Proceedings (2019, September 11)

Abstract: Consider a collaborative task carried out by two autonomous agents that can communicate over a noisy channel.
Each agent is only aware of its own state, while the accomplishment of the task depends on the value of the joint state of both agents. As an example, both agents must simultaneously reach a certain location of the environment, while only being aware of their own positions. Assuming the presence of feedback in the form of a common reward to the agents, a conventional approach would apply separately: (i) an off-the-shelf coding and decoding scheme in order to enhance the reliability of the communication of the state of one agent to the other; and (ii) a standard multi-agent reinforcement learning strategy to learn how to act in the resulting environment. In this work, it is argued that the performance of the collaborative task can be improved if the agents learn how to jointly communicate and act. In particular, numerical results for a baseline grid world example demonstrate that the jointly learned policy carries out compression and unequal error protection by leveraging information about the action policy.
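The setup described in this abstract can be illustrated with a minimal environment sketch. Everything below (class name, grid size, one-bit messages, the `flip_prob` channel parameter) is an illustrative assumption, not the paper's actual implementation: two agents on a 1-D grid each observe only their own position, exchange a single bit through a binary symmetric channel, and receive a common reward only when both occupy the goal cell simultaneously.

```python
import random

class NoisyGridWorld:
    """Illustrative sketch (not the paper's code): two agents on a 1-D grid,
    each observing only its own position. Each step an agent picks a move and
    a 1-bit message; the message reaches the other agent through a binary
    symmetric channel that flips it with probability `flip_prob`. A common
    reward of 1 is given only when both agents sit on the goal cell."""

    def __init__(self, size=5, goal=4, flip_prob=0.1, seed=0):
        self.size, self.goal, self.flip_prob = size, goal, flip_prob
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.pos = [0, 0]  # both agents start at cell 0
        return tuple(self.pos)

    def _channel(self, bit):
        # Binary symmetric channel: flip the bit with probability flip_prob.
        return bit ^ 1 if self.rng.random() < self.flip_prob else bit

    def step(self, actions, messages):
        # actions: per-agent move in {-1, 0, +1}; messages: per-agent bit.
        for i, move in enumerate(actions):
            self.pos[i] = min(self.size - 1, max(0, self.pos[i] + move))
        # Each agent receives the other agent's (possibly corrupted) bit.
        received = (self._channel(messages[1]), self._channel(messages[0]))
        reward = 1.0 if self.pos == [self.goal, self.goal] else 0.0
        # Observations stay local: own position plus the received bit.
        obs = ((self.pos[0], received[0]), (self.pos[1], received[1]))
        return obs, reward

# A random-policy rollout; a learned joint policy would instead condition
# both the move and the transmitted bit on the local observation.
env = NoisyGridWorld()
obs = env.reset()
total_reward = 0.0
for _ in range(20):
    acts = [env.rng.choice([-1, 0, 1]) for _ in range(2)]
    msgs = [env.rng.choice([0, 1]) for _ in range(2)]
    obs, r = env.step(acts, msgs)
    total_reward += r
```

In this toy form, the paper's point is that learning the message policy jointly with the action policy lets agents allocate the single noisy bit to the state distinctions that actually matter for the reward, rather than coding the full state uniformly.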